WO2020156487A1 - Scene recognition method and device, terminal, and storage medium - Google Patents

Scene recognition method and device, terminal, and storage medium

Info

Publication number
WO2020156487A1
WO2020156487A1 (PCT/CN2020/074016)
Authority
WO
WIPO (PCT)
Prior art keywords
target
scene
object model
target object
vertices
Prior art date
Application number
PCT/CN2020/074016
Other languages
English (en)
French (fr)
Inventor
张璠
邓一鑫
顾宝成
魏冬
柯胜强
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20748486.6A (published as EP3905204A4)
Publication of WO2020156487A1
Priority to US17/389,688 (published as US11918900B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 Game security or game management aspects
    • A63F13/77 Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/532 Query formulation, e.g. graphical querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/20 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform
    • A63F2300/209 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of the game platform characterized by low level software layer, relating to hardware management, e.g. Operating System, Application Programming Interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6692 Methods for processing data by generating or executing the game program for rendering three dimensional images using special effects, generally involving post-processing, e.g. blooming

Definitions

  • This application relates to the field of computer graphics technology, and in particular to a scene recognition method and device, terminal, and storage medium.
  • A terminal manufacturer may configure the terminal to recognize the game scene of the current image frame while running a competitive game, and to add different special effects for different game scenes. For example, after recognizing the current game scene, a mobile terminal may add special effects such as sound or touch for different game scenes.
  • A scene recognition method is provided in the related art. The method is based on image recognition technology: a training set is established in advance from the game screens of multiple image frames in the game scene, and image recognition technology is used to match the game screen of the current image frame against the game screens of the multiple image frames in the training set, so as to obtain the game screen of a certain image frame in the training set that matches the game screen of the current image frame. The game scene corresponding to that matched game screen is then used as the game scene corresponding to the game screen of the current image frame.
  • However, image recognition technology consumes a large amount of computing resources, which leads to a relatively high computing cost on the terminal.
  • This application provides a scene recognition method and device, terminal, and storage medium, which can save the computing resources of the system.
  • In a first aspect, an exemplary embodiment of the present application provides a scene recognition method. The method includes: acquiring at least one drawing instruction for drawing a target object; determining, according to the description parameters of the at least one drawing instruction, the target object model of the target object to be drawn, where the target object model is a rendering model used for image rendering and the description parameters indicate vertex information of the target object model; and determining the corresponding target scene according to the target object model.
  • The scene recognition method provided by this embodiment of the application can determine the target scene from the description parameters of the drawing instructions that draw the object. Compared with the large amount of computing resources consumed by the image recognition technology adopted in the related art, the scene recognition method provided by this embodiment effectively saves system resources.
  • In some implementations, determining the corresponding target scene according to the target object model may mean directly determining the corresponding target scene from the target object model; in other implementations, it may mean first determining the corresponding object from the target object model and then determining the target scene from that object. If the computer stores a correspondence between object models and scenes, the former can be used; if the computer stores a correspondence between objects (such as a game character's identity) and scenes, the latter can be used. For other types of correspondences stored in the computer, the method provided in this application can be adapted accordingly without departing from the scope of this application.
  • At least one drawing instruction in this application can be understood as “one or more instructions”.
  • For example, one drawing instruction can draw one object, multiple drawing instructions can draw one object, and one drawing instruction can also draw multiple objects; that is, drawing instructions and objects can have a one-to-one, one-to-many, or many-to-one relationship.
  • For ease of description, the following assumes that a drawing instruction is used to draw one object.
  • the "object” proposed in this application can be a human or an animal; it can be a static object or a dynamic object; it can exist in the background or in the foreground, etc. Not limited.
  • the target object model is obtained by matching vertex information with vertex information of one or more known object models.
  • the known vertex information of one or more object models may exist in the computer system in the form of a correspondence between the vertex information and the object model.
  • The correspondence may be entered manually or obtained through machine learning (a form of artificial intelligence).
  • Since the vertex information carried in the description parameters may differ, this application provides at least two implementations for determining the target object model.
  • In one implementation, the process of determining the target object model may include: obtaining m vertex sets from the description parameters, where m is a positive integer; determining, from the m vertex sets, a target vertex set that matches the vertex set of one or more known object models; and determining the object model corresponding to the target vertex set as the target object model.
  • the number of vertices in a vertex set is called the "number of vertices”.
  • Optionally, the process of determining the target vertex set that matches the vertex set of one or more known object models may include: determining, among the m vertex sets, a vertex set that has the same content and the same number of vertices as the vertex set of a known object model as the target vertex set. That is, "matching" means that the two vertex sets contain the same number of vertices and that the coordinates of the vertices inside the two sets are also the same. If the vertices in a vertex set are ordered, the vertices of the two sets should correspond one-to-one in order and have identical coordinate values.
  • In this implementation, the matching target vertex set is determined first by the number of vertices in the vertex set and then by the content of the vertex set. Since different object models usually have different numbers of vertices, rapid screening by vertex count followed by precise screening by vertex content not only improves the efficiency of determining the target object model but also ensures the accuracy of the determined target object model.
  • Optionally, the process of obtaining the m vertex sets from the description parameters includes: obtaining all the vertex sets contained in the description parameters and using them as the m vertex sets; or screening some of the vertex sets in the description parameters to obtain the m vertex sets.
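To make this first implementation concrete, the following is a minimal C++ sketch of matching a vertex set captured from a drawing instruction against known object models, screening by vertex count before comparing contents. The names (Vec3, ObjectModel, SameVertexSet, MatchModel) are illustrative assumptions, not from the patent.

```cpp
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct ObjectModel {
    int               index;     // model identifier (index label)
    std::string       name;      // object identifier, e.g. "tree"
    std::vector<Vec3> vertices;  // known vertex set of the model
};

// "Matching" per the text: same vertex count, then identical ordered coordinates.
bool SameVertexSet(const std::vector<Vec3>& a, const std::vector<Vec3>& b) {
    if (a.size() != b.size()) return false;  // fast screen: vertex count
    for (size_t i = 0; i < a.size(); ++i)    // precise screen: exact coordinates
        if (a[i].x != b[i].x || a[i].y != b[i].y || a[i].z != b[i].z)
            return false;
    return true;
}

// Returns the matching known model, or nullptr if none matches.
const ObjectModel* MatchModel(const std::vector<Vec3>& captured,
                              const std::vector<ObjectModel>& known) {
    for (const ObjectModel& m : known)
        if (SameVertexSet(captured, m.vertices)) return &m;
    return nullptr;
}
```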
  • In another implementation, the vertex information includes the number of vertices of the object model, and the target object model is a known object model whose number of vertices equals the number of vertices indicated by the description parameters.
  • As noted above, the description parameters "indicate" the vertex information, so in this implementation the description parameters indicate the number of vertices. The description parameters can directly carry the number of vertices in each vertex set, or carry only the vertex set itself, because given the vertex set, the computer system can compute the number of vertices in the set.
  • In this implementation, the process of determining the target object model may include: obtaining n vertex counts from the description parameters of the at least one drawing instruction, where n is a positive integer; taking each of the n vertex counts in turn as a first vertex count and determining at least one candidate object model among the one or more known object models, where each candidate object model has the first vertex count and the one or more known object models record the vertex counts corresponding to multiple known object models; when the number of candidate object models is 1, determining that candidate object model as the target object model corresponding to the first vertex count; and when the number of candidate object models is greater than 1, selecting the target object model corresponding to the first vertex count from the candidate object models.
  • Further, the target object model may also need to meet the following condition: the target object model matches the target virtual environment, where the target virtual environment corresponds to the image frame to which the target object to be drawn by the at least one drawing instruction belongs. The target virtual environment is a virtual environment created by the application program that generates the aforementioned at least one drawing instruction, and is an interactive two-dimensional or three-dimensional environment generated by that application program on the terminal.
  • Optionally, the process of determining whether the target object model matches the target virtual environment may include: determining the target virtual environment; and selecting, from the candidate object models, the object model with the highest similarity to the object models in the target virtual environment as the target object model.
  • the process of obtaining at least one drawing instruction includes: obtaining the at least one drawing instruction by monitoring an OpenGL interface, where the OpenGL interface is an interface between OpenGL and an application program.
  • the process of acquiring the at least one drawing instruction by monitoring an OpenGL interface may include: performing at least one monitoring acquisition process to obtain the at least one drawing instruction.
  • The monitoring acquisition process includes: monitoring a first OpenGL interface, where the first OpenGL interface is an interface between OpenGL and a designated application and the designated application is used to draw image frames in the target virtual environment; and, after the end indication information indicating that an image frame has been drawn is heard on the first OpenGL interface, obtaining the drawing instructions of the drawn image frame from the OpenGL cache through a second OpenGL interface, where the second OpenGL interface connects to OpenGL.
  • In this way, the time node at which the drawing instructions of a completed image frame can be acquired is known accurately, and the at least one drawing instruction is acquired according to that time node, improving the effectiveness of the scene recognition method.
  • Optionally, before determining the target object model according to the description parameters of the at least one drawing instruction, the method further includes: determining the target virtual environment corresponding to the image frame to which the target object to be drawn by the at least one drawing instruction belongs; and acquiring a first correspondence between description parameters and object models in the target virtual environment. The first correspondence may include a correspondence between description parameters and object models; a correspondence among description parameters, object models, and objects; or a correspondence between description parameters and objects. Of course, since an object corresponds to an object model, the last case also implies a correspondence between description parameters and object models.
  • the process of determining the target object model according to the description parameter of the at least one drawing instruction may include: querying the first correspondence to obtain the target object model corresponding to the description parameter.
  • Optionally, the method further includes: obtaining the scene judgment strategy of the target virtual environment from a set of multiple correspondences, where the set of multiple correspondences records the scene judgment strategies of multiple different virtual environments.
  • Optionally, the scene judgment strategy is a strategy for judging the corresponding scene based on the object model, or a strategy for judging the corresponding scene based on the object identifier. If the scene judgment strategy judges the scene based on the object model, determining the corresponding target scene according to the target object model includes: determining the target scene according to the target object model and the scene judgment strategy for the target object model in the target virtual environment.
  • If the scene judgment strategy judges the scene based on the object identifier, determining the corresponding target scene according to the target object model includes: determining the target scene based on the target object identifier corresponding to the target object model and the scene judgment strategy for the target object identifier in the target virtual environment.
  • In this way, the first correspondence between description parameters and object models and the scene judgment strategy are acquired specifically for the target virtual environment, so that they match the target virtual environment, avoiding the situation where a mismatched first correspondence and scene judgment strategy occupy the memory space of the terminal.
  • Optionally, when the target virtual environment is a game environment, the target scene is judged based on whether the target object model contains guns and bullets, whether it contains multiple characters, or whether a certain character appears for the first time.
  • For example, the scene judgment strategy includes one or more of the following (see the sketch after the notes below): when the target object model includes the object models corresponding to a gun and bullets (or when the target object includes a gun and bullets), determining that the target scene is a shooting scene; or, when the target object model includes the object models corresponding to at least three characters, determining that the target scene is a team battle scene; or, when the target object model in a first image frame includes the object model corresponding to a target character and the target object model in the image frame preceding the first image frame does not include the object model corresponding to the target character, determining that the target scene is a scene in which the target character appears.
  • the "at least three roles" here can be different types of roles or the same type of roles.
  • the first image frame includes two consecutive image frames.
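A minimal C++ sketch of such a game-environment scene judgment strategy, following the three rules above. The Scene enum, the model names "gun" and "bullet", and the "character:" naming prefix are assumptions for illustration only.

```cpp
#include <set>
#include <string>

enum class Scene { Unknown, Shooting, TeamBattle, CharacterAppears };

Scene JudgeScene(const std::set<std::string>& currentModels,
                 const std::set<std::string>& previousModels,
                 const std::string& targetCharacter) {
    // Rule 1: gun and bullet models both present -> shooting scene.
    if (currentModels.count("gun") && currentModels.count("bullet"))
        return Scene::Shooting;
    // Rule 2: at least three character models present -> team battle scene.
    int characters = 0;
    for (const auto& m : currentModels)
        if (m.rfind("character:", 0) == 0) ++characters;  // e.g. "character:king"
    if (characters >= 3) return Scene::TeamBattle;
    // Rule 3: target character present now but absent from the previous frame.
    if (currentModels.count(targetCharacter) && !previousModels.count(targetCharacter))
        return Scene::CharacterAppears;
    return Scene::Unknown;
}
```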
  • Optionally, obtaining the first correspondence between description parameters and object models in the target virtual environment includes: querying the first correspondence stored in a local storage device; or downloading the first correspondence from a server associated with the target virtual environment.
  • In summary, the scene recognition method provided by this embodiment of the present application obtains at least one drawing instruction for drawing a target object, determines the target object model of the target object to be drawn according to the description parameters of the at least one drawing instruction, and finally determines the corresponding target scene according to the target object model. Since the target scene is determined from the target object model and the target object model is determined from the description parameters of the drawing instructions, the scene recognition method provided by this embodiment effectively saves system resources compared with the large amount of computing resources consumed by the image recognition technology adopted in the related art.
  • In a second aspect, an exemplary embodiment of the present application provides a scene recognition device, which includes one or more modules used to implement the scene recognition method described in any one of the implementations of the foregoing first aspect.
  • The scene recognition device acquires at least one drawing instruction for drawing a target object, determines the target object model of the target object to be drawn according to the description parameters of the at least one drawing instruction, and finally determines the corresponding target scene according to the target object model. Since the target scene is determined from the target object model and the target object model is determined from the description parameters of the drawing instructions, the terminal does not need to perform the extensive additional computation required by the image recognition technology used in the related art, and the scene recognition method provided by this embodiment effectively saves system resources.
  • Further, an embodiment of the present application provides a computer, such as a terminal device, including a processor and a memory; the processor executes a computer program stored in the memory to implement the scene recognition method described in the first aspect.
  • An embodiment of the present application also provides a storage medium, which may be non-volatile. A computer program is stored in the storage medium, and the computer program is used to implement the scene recognition method described in the first aspect.
  • The embodiments of the present application further provide a computer program product (or computer program) containing instructions. When the computer program product runs on a computer, the computer executes the scene recognition method described in the first aspect.
  • The scene recognition method, device, terminal, and storage medium provided by the present application acquire at least one drawing instruction for drawing a target object, determine the target object model of the target object to be drawn according to the description parameters of the at least one drawing instruction, and finally determine the corresponding target scene according to the target object model. Since the target scene is determined from the target object model and the target object model is determined from the description parameters of the drawing instructions, the terminal does not need to perform the extensive additional computation required by the image recognition technology used in the related art, and the scene recognition method provided by the embodiments of the present application effectively saves system resources.
  • FIG. 1 is a schematic diagram of an application environment involved in a scene recognition method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of a scene recognition method provided by an embodiment of the present application
  • FIG. 3 is a flowchart of a process of determining a target object model provided by an embodiment of the present application
  • FIG. 4 is a schematic diagram of determining a corresponding target scene according to an acquired model identifier, provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of a target scene that is a shooting scene, provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a target scene that is a team battle scene, provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a target scene in which an enemy appears, provided by an embodiment of the present application;
  • FIG. 8 is a flowchart of a scene recognition method provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a scene recognition device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a second acquisition module provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another scene recognition device provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of another scene recognition apparatus provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the deployment form of the scene recognition device in the software and hardware of the terminal provided by an embodiment of the present application;
  • FIG. 14 is a schematic diagram of another deployment form of the scene recognition device in the software and hardware of the terminal provided by an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the terminal can support and run a variety of large-scale competitive games.
  • the terminal can automatically recognize the game scene of the current image frame of the competitive game, and add different content items for different game scenes.
  • the content item can include special effects or advertisements.
  • For example, the mobile terminal can add special effects such as sound or touch for different game scenes, or serve different advertisements for different game scenes. Therefore, effectively judging the game scene of the current image frame is essential for improving the user experience and increasing the revenue of advertisers.
  • Currently, there are two main scene recognition methods. The first is based on image recognition technology, which matches the game screen of the current image frame against the game screens of multiple image frames in a pre-established training set. When the game screen of the current image frame matches that of a certain image frame in the training set, the game scene corresponding to the game screen of that image frame is taken as the game scene corresponding to the game screen of the current image frame.
  • The second method is to query a limited number of game scenes through some of the application program interfaces opened in the game's Software Development Kit (SDK).
  • For the first scene recognition method, the use of image recognition technology consumes a large amount of computing resources, resulting in a high computational cost for the terminal. Moreover, if the game scenes corresponding to multiple frames of game screens, or the complicated game scenes of a large-scale 3D game, need to be determined, the performance of the terminal is seriously affected; for example, the terminal may freeze or consume too much power.
  • For the second scene recognition method, because game companies often open their application program interfaces only to some terminal manufacturers, the versatility of the method is low, and the number of game scenes that can be queried is also limited; as a result, the second scene recognition method has greater limitations.
  • OpenGL (Open Graphics Library) provides an API that includes interfaces for drawing two-dimensional images or three-dimensional images. The interfaces include drawing functions, such as the drawing function glDrawElements(), and rendering functions, such as the function eglSwapBuffers().
  • functions in OpenGL can be called by instructions.
  • a drawing function can be called by drawing instructions to draw a two-dimensional image or a three-dimensional image.
  • In this application, two-dimensional images and three-dimensional images are collectively referred to as image frames.
  • One drawing instruction can draw one object, multiple drawing instructions can also draw one object, and one drawing instruction can also draw multiple objects; that is, drawing instructions and objects can have a one-to-one, one-to-many, or many-to-one relationship.
  • For ease of description, the following assumes that a drawing instruction is used to draw one object.
  • OpenGL ES (OpenGL for Embedded Systems) is a graphics library in the OpenGL family designed for embedded devices such as mobile phones, PDAs (personal digital assistants), and game consoles.
  • the virtual environment is an interactive two-dimensional or three-dimensional environment generated on the terminal by an application program using a computer graphics system and various interfaces.
  • a scene is a scene in a virtual environment, and each scene is constructed by one or more virtual objects.
  • For example, when the application program is a game application, the corresponding virtual environment is a game environment, and the game scenes in the game environment include, for example, team battle scenes or gun battle scenes.
  • Figure 1 is a schematic diagram of the application environment involved in the scene recognition method.
  • the scene recognition method is executed by the scene recognition device.
  • the scene recognition device is located in the terminal.
  • The scene recognition device can be implemented in the terminal through software. Multiple applications run in the terminal, and the data of each application passes through the OpenGL interface layer, the cache, and the OpenGL application driver in turn.
  • the OpenGL interface layer is used to provide functions in OpenGL to be called by the application program.
  • The cache is used to store the instructions by which an application calls functions in the OpenGL interface layer, and the OpenGL application driver is used to pass the instructions called by the application to the GPU for the GPU to execute.
  • An interface is set between the scene recognition device provided in this embodiment of the application and the cache, so that scene recognition can be performed based on the data in the cache.
  • the application program can call the above-mentioned drawing functions in OpenGL or OpenGL ES from the OpenGL interface layer by executing drawing instructions to implement the relevant functions of drawing two-dimensional images or three-dimensional images in the application.
  • the application is a game application
  • the game application can call a drawing function by executing a drawing instruction to draw a two-dimensional image or a three-dimensional image used to present the game scene in the game.
  • the method includes:
  • Step 101 The scene recognition apparatus obtains at least one drawing instruction, which is used to draw a target object.
  • In this embodiment, the first image frame is the image frame that currently needs to be drawn, that is, the image frame to which the target object to be drawn by the at least one drawing instruction belongs, and it corresponds to the current scene. The first image frame may include one image frame or at least two consecutive image frames; for example, the first image frame includes two consecutive image frames.
  • the scene recognition device may obtain one drawing instruction every time it performs scene recognition. However, in order to ensure the accuracy of the scene recognition result, the scene recognition device may obtain at least two drawing instructions each time it performs scene recognition.
  • the scene recognition apparatus may obtain the at least two drawing instructions in a specified order, and the at least two drawing instructions arranged in the specified order are also referred to as a drawing instruction stream.
  • the specified order is the order in which the application program calls the drawing instructions.
  • the designated application in the terminal can draw the first image frame by calling multiple drawing instructions, and each drawing instruction can draw an object in the first image frame, or each drawing instruction can draw in the first image frame. Multiple objects in, or multiple drawing instructions can draw an object in the first image frame.
  • If the first image frame is one image frame, the scene recognition apparatus can read the multiple drawing instructions corresponding to the image frame from the memory and obtain at least one of them for scene recognition. If the first image frame is at least two consecutive image frames, the scene recognition device can read the multiple drawing instructions corresponding to each of the at least two image frames from the memory and obtain at least one drawing instruction for scene recognition.
  • the scene recognition device obtains the at least one drawing instruction by monitoring an OpenGL interface, which is an interface between OpenGL and an application program.
  • the process of acquiring the at least one drawing instruction by monitoring the OpenGL interface may include: the scene recognition apparatus executes at least one monitoring acquisition process to obtain the at least one drawing instruction.
  • the monitoring acquisition process may include:
  • Step A1 Monitor a first OpenGL interface, which is an interface between OpenGL and a designated application program, and the designated application program is used to draw objects in an image frame in a target virtual environment.
  • the target virtual environment corresponds to the image frame to which the target object to be drawn by at least one drawing instruction belongs
  • The target virtual environment is a virtual environment created by the application that generates the aforementioned at least one drawing instruction (that is, the designated application described above); it is an interactive two-dimensional or three-dimensional environment generated by the application on the terminal, and the user can be immersed in the environment.
  • Optionally, the first OpenGL interface is a designated interface among the programming interfaces defined by OpenGL, and a function can be added to the designated interface. The function can be used to issue end indication information. When the designated application finishes drawing an image frame, calling the designated interface automatically triggers the function to issue the end indication information indicating that the image frame has been drawn.
  • Step A2: After the end indication information indicating that an image frame has been drawn is heard on the first OpenGL interface, obtain the drawing instructions of the completed image frame from the OpenGL buffer through the second OpenGL interface, where the second OpenGL interface connects to OpenGL.
  • the drawing instructions used to draw the image frame can be stored in the OpenGL cache.
  • The OpenGL cache can correspond to a storage area (also called storage space) in the memory, and the memory may be implemented by random access memory (RAM).
  • the second OpenGL interface is a designated interface in the program programming interface defined by OpenGL, and the scene recognition device can obtain a drawing instruction for drawing the image frame in the buffer through the second OpenGL interface.
  • In OpenGL-based image drawing, the application calls the drawing functions in OpenGL to draw an image, and the drawing instructions that call the drawing functions are cached in the memory. The memory sends these drawing instructions through the CPU to the graphics card memory (referred to as video memory, and also known as the frame buffer, which is used to store rendering data that has been or will be processed by the graphics chip). The application draws the image frame according to the drawing instructions sent to the video memory, and calls the display interface in OpenGL to present the drawn image frame on the display interface.
  • This embodiment of the application may choose to use the display interface in OpenGL as the first OpenGL interface and add the calling function to the display interface.
  • For example, the display interface can be the eglSwapBuffer interface.
  • The designated application displays a drawn image frame on the display interface by calling the eglSwapBuffer interface, so a function for sending the end indication information is added to the eglSwapBuffer interface.
  • When the eglSwapBuffer interface is called, the function is triggered to issue the end indication information.
  • The scene recognition device can obtain the drawing instructions of the completed image frame from the OpenGL buffer through the second OpenGL interface after hearing, on the eglSwapBuffer interface, the end indication information indicating that an image frame has been drawn.
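A minimal sketch of this interception in C++: a wrapper around the eglSwapBuffers display interface treats each call as the end indication for one image frame and then reads the buffered drawing instructions. CollectBufferedDrawCalls() is a hypothetical stand-in for the second OpenGL interface; in a real interposer the original eglSwapBuffers would typically be resolved with dlsym rather than called directly.

```cpp
#include <EGL/egl.h>

// Hypothetical hook into the OpenGL cache (the "second OpenGL interface").
void CollectBufferedDrawCalls() { /* read the frame's draw instructions for scene recognition */ }

EGLBoolean HookedEglSwapBuffers(EGLDisplay dpy, EGLSurface surface) {
    // The application calling this interface means one image frame is fully drawn.
    CollectBufferedDrawCalls();
    return eglSwapBuffers(dpy, surface);  // forward to the real display interface
}
```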
  • Before performing step A1, the scene recognition device needs to register, in its registration list, a listener for the called event of the first OpenGL interface; after performing step A2, the scene recognition device needs to unregister the listener for the called event of the first OpenGL interface from the registration list.
  • Step 102 The scene recognition apparatus determines a target object model of the target object to be drawn according to the description parameter of the at least one drawing instruction.
  • step 102 may include:
  • Step 1021 The scene recognition apparatus determines the description parameter of the at least one drawing instruction.
  • When an application calls a drawing function in OpenGL through a drawing instruction, the drawing instruction includes the relevant parameters of the object corresponding to the drawing instruction, and the relevant parameters include the description parameters for describing the object.
  • The scene recognition device can extract the description parameters from each of the at least one drawing instruction; that is, the scene recognition device can determine at least one description parameter based on each drawing instruction in the at least one drawing instruction.
  • the object can be described by parameters such as color and shape.
  • an object can be characterized by an object model, so the description parameter can be a related parameter used to describe the object model, and the object model is a rendering model established for the object for image rendering.
  • Optionally, the object model is composed of multiple vertices on the surface of the object (the multiple vertices are also called a discrete lattice or a model point cloud). Therefore, the description parameter is a parameter that can indicate the vertex information of the object model.
  • Different object models can have different numbers of vertices; therefore, the object model corresponding to a given number of vertices can be determined from that number. In addition, each vertex has its own attribute information, which can include the position coordinate values and texture data of the vertex. The attribute information of the vertices differs between object models, so the object model can also be determined from the attribute information of its vertices. In other words, the vertex information may include the number of vertices of the object model and the attribute information of each vertex.
  • For example, in one drawing instruction the parameter value corresponding to count is 4608. Since every three parameter values are used to represent one vertex (that is, the ratio of parameter values to vertices is 3:1), the number of vertices of the corresponding object model is 4608/3, which is 1536. The scene recognition device can thus determine a description parameter, namely the number of vertices of the object model, from the drawing instruction.
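A one-line illustration of this computation; note that the 3:1 ratio between parameter values and vertices is specific to this example.

```cpp
// Every three parameter values describe one vertex in the example above,
// so a count of 4608 parameter values corresponds to 1536 vertices.
constexpr int VerticesFromCount(int count) { return count / 3; }
static_assert(VerticesFromCount(4608) == 1536, "matches the example in the text");
```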
  • It should be noted that the parameters corresponding to different drawing instructions may differ, so the methods for obtaining the description parameters from those parameters may also differ; the embodiments of the present application do not enumerate them one by one here.
  • Step 1022 The scene recognition device determines the target object model of the target object to be drawn according to the description parameter.
  • the target object model is a rendering model established for the target object for image rendering.
  • a drawing instruction can correspond to one description parameter or multiple description parameters, and each description parameter can correspond to a target object model.
  • The scene recognition device can determine, according to the description parameters of each drawing instruction, one or more target object models to be drawn.
  • the object may correspond to an object identifier, and the object identifier may be used to uniquely identify an object in a designated application.
  • The object identifier may include the name of the object, the posture of the object, the character name of the object, or the name of an item, etc.
  • the object model can correspond to a model identifier, and the model identifier can be used to uniquely identify an object model in a designated application.
  • The object corresponds to the object model; therefore, the object identifier corresponds to the object model and to the model identifier. One object identifier can correspond to multiple model identifiers, and an object identifier can also correspond to a single model identifier.
  • Both the model identification and the object identification can be represented in the form of a character string, and the character string includes numbers, letters and/or text.
  • the object identification can be the name or code of the object.
  • When the object and the object model have a one-to-one relationship, the object identifier and the model identifier can be the same identifier (that is, the two identifiers are multiplexed), or they can be different identifiers; when the object and the object model have a one-to-many relationship, the object identifier and the model identifier can also be different identifiers, for example, the model identifier is an index label and the object identifier is the object name.
  • the target object model is obtained by matching vertex information with vertex information of one or more known object models.
  • The known one or more object models can be stored in the terminal in association with the model identifier (such as an index number) corresponding to each object model, and the object model corresponding to a certain model identifier can be uniquely obtained in the terminal through that model identifier (for example, the model identifier points to the object model through a pointer).
  • Because the description parameters that indicate the vertex information differ, the method by which the scene recognition apparatus determines the target object model of the target object to be drawn according to the description parameters also differs. Accordingly, the embodiments of the present application provide two implementations for determining the target object model of the target object to be drawn.
  • In the first implementation, the vertex information includes the attribute information of the vertices, and the attribute information may be a vertex set.
  • Each object model has a certain number of vertices, and each vertex corresponds to a vertex set represented by three-dimensional coordinate values.
  • The vertex set can be used to represent the position coordinate values in the attribute information of the vertices; a vertex set includes the coordinate values of one or more vertices.
  • The number of vertices and the vertex set possessed by those vertices can be used as the vertex information indicated by the description parameters to describe the object model of an object (the following description uses the object name as the object identifier to characterize the object).
  • a vertex set can be represented in the form of a vertex array.
  • a vertex array refers to one or more vertices represented in the form of an array, where the array refers to a sequence of ordered elements.
  • For example, the number of vertices in a tree's object model is 361, and the vertex array corresponding to the 361 vertices is {<-0.66891545,-0.29673234,-0.19876061>, <-0.5217651,0.022111386,-0.3163959>, <-0.84291315,-0.103498506,-0.14875318>, ...}. This vertex array includes 361 arrays; each array represents a vertex and includes 3 elements, which represent the coordinate values of the vertex. The number of vertices and the vertex array can be used as the description parameters of the tree to describe the tree's object model.
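A sketch of how the tree's description parameters could be held in memory; only the three vertices quoted above are shown, and the type names are illustrative, not from the patent.

```cpp
#include <vector>

struct Vertex { float x, y, z; };

struct ModelDescription {
    int vertexCount;               // number of vertices, e.g. 361 for the tree
    std::vector<Vertex> vertices;  // the vertex array (one entry per vertex)
};

// The tree model's description parameters; only the first three of the 361
// vertices quoted in the text are shown here.
ModelDescription tree{
    361,
    {{-0.66891545f, -0.29673234f, -0.19876061f},
     {-0.5217651f, 0.022111386f, -0.3163959f},
     {-0.84291315f, -0.103498506f, -0.14875318f}}  // ... 358 more ...
};
```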
  • the process of determining the target object model of the target object to be drawn may include:
  • Step Y1 The scene recognition device obtains m sets of vertices from the description parameters, where m is a positive integer.
  • The manner in which the scene recognition apparatus obtains the m vertex sets from the determined description parameters may include the following two optional implementations:
  • In the first optional implementation, the scene recognition device may obtain all the vertex sets included in the description parameters and use all of them as the m vertex sets.
  • The first optional implementation traverses all the vertex sets in the determined description parameters, which ensures the accuracy of scene recognition.
  • However, the terminal will then bear a higher computational cost. Therefore, in order to improve the operational efficiency of the terminal, the second optional implementation can be selected: select some of the determined description parameters, and use the vertex sets in those description parameters as the m vertex sets to reduce the computational cost.
  • The process of screening some of the vertex sets in the description parameters may include screening some of the description parameters and determining the vertex sets corresponding to those description parameters as the m vertex sets. Further, the process of screening some of the description parameters may include: randomly selecting some of the determined description parameters; or counting the appearance frequency of the drawing instructions corresponding to the determined description parameters and selecting the description parameters corresponding to drawing instructions with a lower frequency. Because the objects drawn by drawing instructions that occur less frequently are more distinctive, this helps to accurately determine the target scene (see the sketch below).
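A sketch of the frequency-based screening option, under the assumption that drawing instructions can be keyed (for example by function name plus parameter signature); the DrawCall key and the threshold are illustrative assumptions.

```cpp
#include <map>
#include <string>
#include <vector>

struct DrawCall { std::string key; /* e.g. function name + parameter signature */ };

// Keep only the drawing instructions whose kind appears at most maxFrequency
// times, since rarely drawn objects are more distinctive for scene recognition.
std::vector<DrawCall> SelectRareCalls(const std::vector<DrawCall>& calls,
                                      int maxFrequency) {
    std::map<std::string, int> freq;
    for (const auto& c : calls) ++freq[c.key];  // count appearance frequency
    std::vector<DrawCall> selected;
    for (const auto& c : calls)
        if (freq[c.key] <= maxFrequency) selected.push_back(c);
    return selected;
}
```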
  • Step Y2 The scene recognition device determines a target vertex set matching the vertex set of one or more known object models from the m vertex sets.
  • Optionally, the scene recognition device may determine, among the m vertex sets, a vertex set that has the same content and the same number of vertices as the vertex set of a known object model as the target vertex set. The process of determining the target vertex set can include:
  • Step S1: The scene recognition device takes each vertex set in the m vertex sets in turn as the first vertex set and determines at least one candidate vertex set from the vertex sets of the one or more known object models. Each candidate vertex set has the first vertex number, where the first vertex number is the number of vertices in the first vertex set, and the vertex sets of the one or more known object models record multiple vertex sets corresponding to known object identifiers.
  • Table 1 schematically shows the correspondence between the set of vertices recorded with multiple known object models and other parameters.
  • the other parameters include the index label, the number of vertices of the object model, and the object identifier.
  • In Table 1, the object identifier is represented by the name of the object, the model identifier is represented by the index label, and the object identifiers correspond to the index labels one-to-one. Through the number of vertices and the vertex set of an object model in Table 1, the number of vertices of the object model and the object identifier of the object described by that vertex set can be determined.
  • For example, the number of vertices of the object model with the object identifier "King" is 1243, the corresponding vertex set is {<-0.65454495,-0.011083424,-0.027084148>, <-0.8466003,0.026489139,-0.14481458>, <-0.84291315,-0.103498506,-0.14875318>, ...}, and the index label of the object model whose object identifier is "King" is 25.
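For reference, a partial reconstruction of Table 1 from the rows quoted in this text (the tree row is taken from the example in step Y3 below; the full table is not reproduced in this publication):

  Index label | Number of vertices | Object identifier | Vertex set (truncated)
  25          | 1243               | King              | {<-0.65454495,-0.011083424,-0.027084148>, <-0.8466003,0.026489139,-0.14481458>, ...}
  27          | 361                | Tree              | {<-0.66891545,-0.29673234,-0.19876061>, <-0.5217651,0.022111386,-0.3163959>, ...}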
  • It should be noted that Table 1 is only a schematic table. In actual implementations, the other parameters in Table 1 above may include only at least two of the index label, the number of vertices of the object model, and the object identifier, or only any one of them.
  • In step S1, the process by which the scene recognition device takes each of the m vertex sets as the first vertex set and determines at least one candidate vertex set from the vertex sets of the one or more known object models includes: taking each of the m vertex sets as the first vertex set and performing a set screening process. The set screening process may include:
  • Step B1 The scene recognition device determines the number of first vertices in the first set of vertices.
  • That is, the scene recognition device selects any vertex set in the description parameters and determines the first vertex number that the vertex set has.
  • Optionally, the scene recognition apparatus can also directly use the number of vertices extracted from the drawing instruction as the first vertex number, and determine the vertex set corresponding to the first vertex number as the first vertex set.
  • Step B2: The scene recognition device detects whether a vertex set with the first vertex number exists among the vertex sets of the known one or more object models.
  • Step B3: When a vertex set with the first vertex number exists among the vertex sets of the known one or more object models, the scene recognition device determines the vertex sets with the first vertex number as the candidate vertex sets.
  • Each candidate vertex set has the first vertex number.
  • Step B4: When no vertex set with the first vertex number exists among the vertex sets of the known one or more object models, the scene recognition device takes the vertex set after the first vertex set as the new first vertex set, and performs the set screening process again.
  • During implementation, the scene recognition device may organize the m vertex sets as a queue of length m, and perform the set screening process of steps B1 to B4 starting from the first element of the queue until all the elements in the queue have been traversed.
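A sketch of steps B1 to B4 as a queue traversal, reusing the Vec3, ObjectModel, and SameVertexSet types from the earlier matching sketch; the count-indexed map of known models is an assumption for fast candidate lookup.

```cpp
#include <deque>
#include <unordered_map>
#include <vector>

// Maps a vertex count to the known object models having that many vertices.
using CandidateIndex = std::unordered_map<size_t, std::vector<const ObjectModel*>>;

std::vector<const ObjectModel*> ScreenSets(std::deque<std::vector<Vec3>> queue,
                                           const CandidateIndex& index) {
    std::vector<const ObjectModel*> matches;
    while (!queue.empty()) {                 // traverse all m elements of the queue
        const auto& first = queue.front();   // B1: first vertex set and its vertex count
        auto it = index.find(first.size());  // B2: any known sets with this count?
        if (it != index.end()) {             // B3: compare contents of the candidates
            for (const ObjectModel* m : it->second)
                if (SameVertexSet(first, m->vertices)) matches.push_back(m);
        }                                    // B4: otherwise move on to the next set
        queue.pop_front();
    }
    return matches;
}
```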
  • Step S2 The scene recognition device compares the first vertex set with each vertex set in the at least one candidate vertex set to obtain a target vertex set matching the first vertex set.
  • Here, matching means that the vertex sets have the same content and contain the same number of vertices.
  • In this way, the at least one candidate vertex set can be further screened by comparing the content of the vertex sets (for example, the coordinate values in the vertex sets), achieving the effect of accurately determining the object identifier.
  • Step Y3 The scene recognition device determines the object model corresponding to the target vertex set as the target object model.
  • the scene recognition device may determine the model identifier corresponding to the target vertex set by querying the correspondence relationship or the like, and determine the object model corresponding to the model identifier as the target object model.
  • the corresponding relationship can be stored in a table.
  • For example, if the target vertex set is {<-0.66891545,-0.29673234,-0.19876061>, <-0.5217651,0.022111386,-0.3163959>, <-0.84291315,-0.103498506,-0.14875318>, ...}, querying Table 1 yields the model identifier, namely the index label 27, so the target object model is the "tree" model.
  • In this way, by screening vertex sets from the description parameters as the m vertex sets, the accuracy of scene recognition is guaranteed, the computational cost is reduced, and the efficiency of scene recognition is improved.
  • In the second implementation, the vertex information of the target object model indicated by the description parameters includes the number of vertices of the object model. The target object model is a rendering model used for image rendering, and the target object model determined from the description parameters is a known object model whose number of vertices equals the number of vertices indicated by the description parameters.
  • the process of the scene recognition apparatus determining the target object model of the target object to be drawn may include:
  • Step Y4 The scene recognition device obtains the number of n vertices from the description parameter of at least one drawing instruction, where n is a positive integer.
  • The n vertex counts may be the counts of all the vertices included in the description parameters determined by the scene recognition device, or the counts of some vertices selected by the scene recognition device from the determined description parameters.
  • Step Y5: The scene recognition device takes each of the n vertex counts in turn as the first vertex count and determines at least one candidate object model among the known object models. Each candidate object model has the first vertex count, and the known object models record the vertex counts corresponding to the one or more known object models.
  • Table 2 schematically shows the recorded correspondence between the vertex counts of multiple known object models and other parameters, where the other parameters include index numbers and object identifiers. Among these object models, the object identifier is represented by the name of the object, the object model is represented by the index number, and object identifiers correspond one-to-one to index numbers. Because a correspondence exists between the vertex count of an object model and its object identifier, the object identifier of the object described can be determined from the vertex count of the object model.
  • Table 2 is only a schematic table; in actual implementation, the other parameters in Table 2 may include only an index number or an object identifier.
  • Step Y6: When the number of candidate object models is 1, the scene recognition apparatus determines the candidate object model as the target object model corresponding to the first vertex count.
  • For example, if the vertex count determined by the scene recognition apparatus is 1068, and in Table 2 only index number 26 corresponds to a vertex count of 1068, it can be determined that there is one candidate object model, and the scene recognition apparatus determines the "soldier" object model as the target object model corresponding to the first vertex count.
  • Step Y7: When the number of candidate object models is greater than 1, the scene recognition apparatus selects the target object model corresponding to the first vertex count from among the candidate object models.
  • For example, if the vertex count determined by the scene recognition apparatus is 361, and in Table 2 index numbers 27 and 28 both correspond to a vertex count of 361, it can be determined that there are two candidate object models, and the scene recognition apparatus must further select the target object model between the "tree" and "pistol" object models.
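  • As a rough illustration of steps Y4 to Y7, the following sketch mirrors the structure of Table 2 with illustrative data; selection by vertex count alone either resolves the model or leaves an ambiguity for further selection:

```python
# Stand-in for Table 2: index number -> (object identifier, vertex count).
KNOWN_COUNTS = {25: ("king", 1243), 26: ("soldier", 1068),
                27: ("tree", 361), 28: ("pistol", 361)}

def candidates_by_count(first_vertex_count):
    """Step Y5: all known models whose vertex count equals the first count."""
    return [idx for idx, (_, count) in KNOWN_COUNTS.items()
            if count == first_vertex_count]

def resolve(first_vertex_count, pick_for_environment=None):
    cands = candidates_by_count(first_vertex_count)
    if len(cands) == 1:                        # step Y6: unique candidate
        return cands[0]
    if len(cands) > 1 and pick_for_environment:
        return pick_for_environment(cands)     # step Y7: disambiguate further
    return None

print(resolve(1068))   # -> 26 ("soldier"): the only model with 1068 vertices
print(resolve(361))    # -> None here; 27 and 28 both have 361 vertices
```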
  • Optionally, the process in which the scene recognition apparatus selects the target object model from the candidate object models may include:
  • Step C1: The scene recognition apparatus determines the target virtual environment to which the first image frame belongs.
  • The target virtual environment is the virtual environment to which the image frame currently to be drawn (that is, the first image frame) belongs, and is an interactive two-dimensional or three-dimensional environment generated on the terminal by the designated application. A designated application can correspond to one target virtual environment, and different target virtual environments can correspond to different objects.
  • For example, when the designated application is a game application, the target virtual environment is the game environment of that application, and different game applications can have different types of game environment: a gun-battle game application usually includes objects such as pistols, rifles, and jeeps, while a historical game application usually includes various historical figures. As another example, when the designated application is a payment application, the target virtual environment may be a transaction environment, which usually includes objects such as weights or currency.
  • Each target virtual environment can correspond to multiple object models. In actual implementation, object models that can be used to characterize the target virtual environment can be selected from among them; these may be object models that appear in the target virtual environment, or object models unique to it, so that the target object model can be determined. The selection of the object models characterizing the target virtual environment can be completed by the application developer, and when the terminal runs the application, the scene recognition apparatus in the terminal can automatically obtain the selection result.
  • Step C2: The scene recognition apparatus selects, from the candidate object models, an object model matching the target virtual environment as the target object model.
  • The scene recognition apparatus can compare the candidate object models with the multiple object models that may appear in the target virtual environment; if a candidate object model is consistent with one of them, that candidate object model can be determined as the target object model.
  • For example, if the current target virtual environment is a historical game environment and the candidate object models are the "tree" and "pistol" models, the "tree" model appears among the object models that may appear in the historical game environment, while a "pistol" obviously cannot appear among the object models characterizing that environment. Therefore, "tree" can be determined as the target object model with 361 vertices.
  • Alternatively, the type of each candidate object model can be compared with the types of the multiple object models that may appear in the target virtual environment; if the type of a candidate object model is consistent with a type in the target virtual environment, that candidate object model can be determined as the target object model.
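  • A minimal sketch of steps C1 and C2 under the same illustrative tables: the candidate set is intersected with the models selected as characterizing the target virtual environment (the environment names and model lists below are assumptions for illustration):

```python
# Models chosen (e.g. by the developer) as characterizing each environment.
ENVIRONMENT_MODELS = {
    "historical_game": {25, 26, 27},   # king, soldier, tree
    "gun_battle_game": {26, 27, 28},   # soldier, tree, pistol
}

def select_for_environment(candidate_ids, environment):
    """Step C2: keep the candidate that appears among the environment's models."""
    matching = [c for c in candidate_ids if c in ENVIRONMENT_MODELS[environment]]
    return matching[0] if len(matching) == 1 else None

# With 361 vertices the candidates are 27 (tree) and 28 (pistol); in a
# historical game environment only the tree survives the comparison.
print(select_for_environment([27, 28], "historical_game"))  # -> 27
```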
  • Optionally, before step 102, the scene recognition method may further include:
  • Step D1: The scene recognition apparatus determines a target virtual environment, where the target virtual environment is the virtual environment to which the first image frame belongs, and the first image frame is the image frame to which the target object to be drawn by the at least one drawing instruction belongs.
  • Step D2: The scene recognition apparatus obtains the first correspondence between description parameters and object models in the target virtual environment.
  • The first correspondence may include the correspondence between description parameters and object models, or the correspondence among description parameters, object models, and objects, or the correspondence between description parameters and objects; since objects correspond to object models, any of these forms serves the same purpose. The embodiments of the present application take the case in which the first correspondence includes the correspondence between description parameters and object models as an example for description.
  • Correspondingly, the process in which the scene recognition apparatus determines the target object model of the target object to be drawn according to the description parameters may include: the scene recognition apparatus queries the first correspondence to obtain the target object model corresponding to the description parameters.
  • Different target virtual environments can correspond to different first correspondences between description parameters and object models. Therefore, to improve the accuracy and efficiency of determining the object model of the object described by the description parameters, the first correspondence corresponding to the target virtual environment can be obtained in advance.
  • Optionally, since the object identifiers included in different applications differ considerably, the developer of an application may provide the first correspondence between the description parameters and the object models for that application.
  • Optionally, the first correspondence may be stored in a local storage device of the terminal, for example in the flash memory of the terminal (for example, when the terminal is a mobile terminal, the flash memory may be an embedded multimedia card (eMMC)); or the first correspondence may be stored in a server associated with the target virtual environment, for example in the cloud associated with the target virtual environment.
  • The process in which the scene recognition apparatus obtains the first correspondence between description parameters and object models in the target virtual environment may include: querying whether the first correspondence for the target virtual environment is stored in the local storage device; when the local storage device does not store it, downloading the first correspondence for the target virtual environment from the server associated with the target virtual environment. Optionally, the downloaded first correspondence can be stored in the local storage device.
  • The first correspondence can be stored in the form of a database file (for example, gamemodel.db). Each target virtual environment can correspond to one database file, and an index item can be set for the first correspondence stored in the local storage device (that is, the database file) according to the name of the target virtual environment (or the name of the application). The scene recognition apparatus can then use the name of the target virtual environment to quickly query whether the corresponding first correspondence is stored in the local storage device.
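  • One way such a lookup could be organized is sketched below with Python's standard sqlite3 module; the file name gamemodel.db comes from the text, while the directory, server URL, and table schema are illustrative assumptions:

```python
import os
import sqlite3
import urllib.request

DB_DIR = "/data/scene_recognition"          # assumed local storage location

def load_first_correspondence(environment_name):
    """Query the local database file named after the target virtual environment;
    if it is missing, download it from the associated server and store it."""
    path = os.path.join(DB_DIR, f"{environment_name}_gamemodel.db")
    if not os.path.exists(path):
        # Assumed server layout; the real association is deployment-specific.
        url = f"https://models.example.com/{environment_name}/gamemodel.db"
        urllib.request.urlretrieve(url, path)
    conn = sqlite3.connect(path)
    # Assumed schema: description-parameter key -> model identifier.
    rows = conn.execute("SELECT param_key, model_id FROM correspondence")
    return dict(rows.fetchall())
```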
  • The process of determining the name of the target virtual environment may include: when a designated application is started, it first needs to initialize its corresponding target virtual environment, and to do so it calls the initialization interface in OpenGL to run that target virtual environment. A function for obtaining the process name of the currently running process can be set in the initialization interface, and the name of the target virtual environment is then determined according to a preset correspondence between process names and target virtual environment names.
  • For example, the designated application can call the eglInitialize interface to initialize the target virtual environment. A callback function for obtaining the process name of the process (for example, the iGraphicsGameScenseInit function) is set in the eglInitialize interface, and the name of the target virtual environment is determined by looking up the process name of the current process in the preset correspondence between process names and target virtual environment names.
  • The correspondence between process names and target virtual environment names may be stored in the form of an xml table.
  • For example, in such an xml table, the stored process name is "com.tent.tmgp.sgame" and the corresponding target virtual environment name is "AAAA".
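  • The text does not reproduce the xml table itself; one plausible shape, using only the process name and environment name quoted above and an assumed tag layout, could be parsed as follows:

```python
import xml.etree.ElementTree as ET

# Assumed layout of the xml table; only the values "com.tent.tmgp.sgame"
# and "AAAA" come from the text, the tags are illustrative.
XML_TABLE = """
<process_map>
  <entry process="com.tent.tmgp.sgame" environment="AAAA"/>
</process_map>
"""

def environment_for_process(process_name):
    root = ET.fromstring(XML_TABLE)
    for entry in root.iter("entry"):
        if entry.get("process") == process_name:
            return entry.get("environment")
    return None

print(environment_for_process("com.tent.tmgp.sgame"))  # -> "AAAA"
```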
  • Optionally, the scene recognition apparatus may acquire the first correspondence when the terminal on which the designated application is installed runs the designated application for the first time. This setting not only allows the first correspondences stored in the terminal to grow adaptively, but also saves the memory space of the terminal to the greatest extent.
  • Optionally, the terminal or the server associated with the target virtual environment can expand and upgrade the corresponding first correspondence, so that the scene recognition apparatus can obtain it and recognize new scenes.
  • Optionally, when a deletion condition is met, the first correspondence may be deleted. The deletion condition may be that the designated application is uninstalled or that the user triggers a deletion instruction, which effectively reduces the memory space occupied by the first correspondence in the terminal.
  • Step 103: The scene recognition apparatus determines the corresponding target scene according to the target object model.
  • Optionally, the scene recognition apparatus can determine the corresponding target scene by querying a correspondence or the like according to the target model identifier (that is, the model identifier of the target object model). For example, the scene recognition apparatus pre-stores the correspondence between model identifiers and scenes, and the target scene can be obtained by querying it with the target model identifier.
  • Optionally, the process in which the scene recognition apparatus determines the target scene corresponding to the obtained object model may include: the scene recognition apparatus determines the target scene according to the target object model and the scene judgment strategy for the target object model in the target virtual environment. In that case, after the scene recognition apparatus determines the target virtual environment in step D1, the method may further include:
  • Step D3: The scene recognition apparatus obtains, from multiple relation sets, the scene judgment strategies in the target virtual environment. The multiple relation sets record scene judgment strategies in multiple different virtual environments, and each virtual environment can correspond to multiple scene judgment strategies; a scene judgment strategy is a strategy for judging the corresponding scene based on an object model.
  • Optionally, the multiple relation sets may be stored in the local storage device of the terminal, for example in the flash memory of the terminal (for example, when the terminal is a mobile terminal, the flash memory may be an embedded multimedia card), or may be stored in a server associated with the target virtual environment, for example in the cloud associated with the target virtual environment.
  • The process in which the scene recognition apparatus obtains the scene judgment strategies in the target virtual environment from the multiple relation sets may include: querying whether the scene judgment strategies for the target virtual environment are stored in the local storage device; when the local storage device does not store them, downloading the scene judgment strategies for the target virtual environment from the server associated with the target virtual environment. Optionally, the downloaded scene judgment strategies can be stored in the local storage device.
  • Optionally, the corresponding scene judgment strategies in the multiple relation sets can be expanded and upgraded, so that the scene recognition apparatus can obtain them and recognize new scenes. The scene judgment strategies in the multiple relation sets can be added or modified according to actual needs, and can be formulated for key scenes in the designated application.
  • Optionally, when a deletion condition is met, the scene judgment strategies are deleted. The deletion condition may be that the designated application is uninstalled or that the user triggers a deletion instruction, which effectively reduces the memory space occupied by the scene judgment strategies in the terminal.
  • FIG. 4 shows a schematic diagram of the scene recognition apparatus determining the corresponding target scene according to the obtained model identifiers: the scene recognition apparatus determines the target scene according to the model identifiers obtained in step 102 and the obtained scene judgment strategies in the target virtual environment. For example, the obtained model identifiers are "model 1", "model 2", and "model 3", and the obtained scene judgment strategies in the target virtual environment comprise four strategies, namely "scene judgment strategy 1" through "scene judgment strategy 4".
  • For example, the target virtual environment may be a game environment, and the scene judgment strategy may include one or more of the following: when the target object model includes guns and bullets, the target scene is determined to be a shooting scene; or, when the target object model includes at least three characters (for example, hero characters), the target scene is determined to be a team battle scene; or, when the target object model in the first image frame includes a target character and the target object model in the previous image frame of the first image frame does not include the target character, the target scene is determined to be a scene in which the target character appears. The target character may be, for example, an enemy character.
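  • These strategies can be read as predicates over the model identifiers recognized in consecutive frames; the following is a minimal sketch, with illustrative function and identifier names rather than the patent's implementation:

```python
def judge_scene(current_models, previous_models, target_character="enemy"):
    """Apply the example scene judgment strategies to the model identifiers
    recognized in the current frame and in the previous frame."""
    if {"gun", "bullet"} <= current_models:
        return "shooting scene"
    heroes = [m for m in current_models if m.startswith("hero")]
    if len(heroes) >= 3:
        return "team battle scene"
    if target_character in current_models and target_character not in previous_models:
        return f"scene where {target_character} appears"
    return "unrecognized scene"

print(judge_scene({"gun", "bullet"}, set()))                   # shooting scene
print(judge_scene({"hero_a3", "hero_a4", "hero_a5"}, set()))   # team battle scene
print(judge_scene({"enemy"}, {"tree"}))                        # enemy appears
```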
  • FIG. 5 shows a schematic diagram of a target scene being a shooting scene. When the model identifiers of the objects described by the description parameters are "gun" a1 and "bullet" a2, based on the above scene judgment strategy the target scene can be determined to be a shooting scene. FIG. 5 schematically shows the object model of "gun" a1 and the object model of "bullet" a2.
  • FIG. 6 shows a schematic diagram of a target scene being a team battle scene. When the model identifiers of the objects described by the determined description parameters are hero a3, hero a4, and hero a5, based on the above scene judgment strategy the target scene can be determined to be a team battle scene. FIG. 6 schematically shows the object models of hero a3, hero a4, and hero a5.
  • Corresponding special effects can be added for the target scene; for example, a call-for-help message can be automatically sent to teammates through the game SDK interface. In this way, when the user uses the designated application, the team battle scene can be recognized automatically without user operation and teammates can be notified of the team battle in time, which increases the probability of winning the team battle and enhances the user experience.
  • Because this scene recognition method places only a small load on the terminal, large and complex scenes can be recognized in real time, for example a three-dimensional scene rendered at 60 frames per second in a game environment.
  • Furthermore, an interface for querying the current scene recognition result can be provided at the API interface layer of the terminal operating system, so that other modules can accurately obtain the current scene, as determined by the scene recognition method, by calling that interface. Therefore, when a module for enhancing game special effects is provided in a designated application, the module can accurately set special effects for each scene by calling the interface for querying the current scene recognition result, which effectively enhances the user experience; when a module for placing advertisements is provided in the designated application, the module can accurately place advertisements for each scene by calling the interface.
  • The advertisement can be a third-party advertisement unrelated to the designated application, in which case targeted placement can effectively increase the advertiser's revenue; or it can be an internal advertisement related to the scenes of the designated application. For example, in a fighting scene of a game application, the advertisements placed can be advertisements selling virtual equipment, which can improve the user's game experience.
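  • A module such as the special-effect or advertising module described here would only need a small query surface; the following is a hedged sketch of what that extension interface could look like, where the class name and polling style are assumptions:

```python
class SceneQueryInterface:
    """Assumed shape of the extension interface at the system API layer that
    exposes the latest result produced by the scene recognition method."""
    def __init__(self):
        self._current_scene = "unrecognized scene"

    def publish(self, scene):            # called by the scene recognition module
        self._current_scene = scene

    def query_current_scene(self):       # called by effect/advertising modules
        return self._current_scene

iface = SceneQueryInterface()
iface.publish("team battle scene")
if iface.query_current_scene() == "team battle scene":
    print("add team-battle special effect / place equipment advertisement")
```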
  • For example, when the first image frame comprises two image frames: if an "enemy" character in the game environment (that is, a model whose model identifier is "enemy") appears at a relatively distant position in the game scene, then on a terminal with a smaller display interface the displayed "enemy" character is very small and difficult for the user to recognize with the naked eye, making it difficult for the user to determine the current target scene.
  • FIG. 7 shows a schematic diagram of a target scene in which an enemy appears: the left side of FIG. 7 shows an image frame with an enemy character, and the right side of FIG. 7 shows the adjacent image frame.
  • Corresponding special effects can be added for this target scene, such as presenting alarm information. The alarm information can be presented in the form of voice, vibration, a light signal, or a logo image; for example, a red flashing alarm appears on the periphery of the screen and lasts for 3 seconds.
  • That is, the scene judgment strategy can also be applied to identify tiny people or tiny distant objects in the scene. Such a strategy is particularly applicable to terminals with smaller screens, such as mobile terminals, and helps improve the user experience.
  • Because the scene recognition apparatus in the terminal continuously performs scene recognition with this method while the application runs, it consumes some performance of the terminal operating system. Therefore, a corresponding switch can be set for the scene recognition method, corresponding to its enabled state. Before performing step 101 above, the scene recognition apparatus can determine the current activation state of the scene recognition method and execute steps 101 to 103 only when the scene recognition method is enabled. The activation state can be turned on by the user or automatically by the scene recognition apparatus.
  • In summary, the obtained vertex sets can first be screened against the vertex sets of the known one or more object models by the first vertex count to obtain at least one candidate vertex set, and the obtained vertex set is then compared with the at least one candidate vertex set one by one to obtain the target vertex set with the same content as the obtained vertex set. Since the vertex counts of different object models are generally different, screening first by vertex count quickly and efficiently eliminates most mismatched vertex sets, and screening then by vertex set content accurately determines the object identifier of the object described by the description parameters. Further, step 103 performed on this basis determines the target scene more efficiently, and the determined target scene is more accurate.
  • the terminal can use the scene recognition method to perform scene recognition in real time.
  • the object identifier of the target object may correspond to the identifier of at least one target object model.
  • In the scene recognition method provided in this embodiment of the application, in the first optional implementation, after the scene recognition apparatus determines the target object model of the target object to be drawn according to the description parameters, the target scene can be determined directly from the target object model; in the second optional implementation, after determining the target object model according to the description parameters, the scene recognition apparatus first determines the target object from the target object model (that is, determines the target object identifier) and then determines the target scene from the target object.
  • In the second optional implementation, the scene recognition method includes:
  • Step 201: The scene recognition apparatus obtains at least one drawing instruction, which is used to draw a target object.
  • For the specific process of step 201, reference may be made to step 101 above, which is not repeated in this embodiment of the present application.
  • Step 202: The scene recognition apparatus determines the target object model of the target object to be drawn according to the description parameters of the at least one drawing instruction.
  • For the specific process of step 202, reference may be made to step 102 above, which is not repeated in this embodiment of the present application.
  • Step 203: The scene recognition apparatus determines the corresponding target object according to the target object model.
  • Optionally, the scene recognition apparatus may determine the corresponding target object by querying a correspondence or the like according to the target object model; the correspondence may be stored in a table.
  • For example, if the index number of the determined target object model is 26, consulting Table 1 yields "soldier" as the object identifier of the target object.
  • Step 204: The scene recognition apparatus determines the corresponding target scene according to the target object.
  • Optionally, the scene recognition apparatus can determine the corresponding target scene by querying a correspondence or the like according to the target object identifier (that is, the object identifier of the target object). For example, the scene recognition apparatus pre-stores the correspondence between object identifiers and scenes, and the target scene can be obtained by querying it with the object identifier of the target object.
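  • Under this second implementation, the chain of lookups runs model identifier, then target object, then target scene; a minimal sketch with illustrative dictionaries standing in for the stored correspondences (the object-to-scene mapping below is an assumed example):

```python
# Stand-ins for the stored correspondences (Table 1 and an object->scene map).
MODEL_TO_OBJECT = {26: "soldier", 27: "tree", 28: "pistol"}
OBJECT_TO_SCENE = {"soldier": "battle scene"}   # assumed example mapping

def scene_from_model(model_index):
    """Steps 203 and 204: look up the target object, then the target scene."""
    target_object = MODEL_TO_OBJECT.get(model_index)
    if target_object is None:
        return None
    return OBJECT_TO_SCENE.get(target_object)

print(scene_from_model(26))  # index 26 -> "soldier" -> "battle scene"
```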
  • Optionally, the process in which the scene recognition apparatus determines the target scene corresponding to the obtained target object may include: the scene recognition apparatus determines the target scene according to the target object identifier and the scene judgment strategy for that object identifier in the target virtual environment. In that case, before step 201, the scene recognition apparatus obtains, from multiple relation sets, the scene judgment strategies in the target virtual environment; the multiple relation sets record scene judgment strategies in multiple different virtual environments, each virtual environment can correspond to multiple scene judgment strategies, and here a scene judgment strategy is a strategy for judging the corresponding scene based on an object identifier.
  • It should be noted that the object identifier and the model identifier can be the same identifier, in which case the process of determining the target object is essentially the same as the process of determining the target object model.
  • In summary, the scene recognition method provided in the embodiments of the present application obtains at least one drawing instruction for drawing objects in the first image frame, determines, based on each of the at least one drawing instruction, the description parameters used to describe the object corresponding to that drawing instruction, and finally determines the corresponding target scene according to the object model of the object described by the determined description parameters. Since the target scene is determined from the description parameters carried by the drawing instructions, the terminal does not need to perform additional computation; compared with the large amount of computing resources consumed by the image recognition technology used in the related art, the scene recognition method provided in the embodiments of the present application effectively saves system resources.
  • Moreover, the scene recognition method provided by the embodiments of the present application relies neither on open application programming interfaces of terminal manufacturers nor on the logic rules of game design, and is platform-generic for operating systems that use OpenGL.
  • an embodiment of the present application provides a scene recognition apparatus 200, and the apparatus 200 includes:
  • the first obtaining module 201 is configured to obtain at least one drawing instruction, where the drawing instruction is used to draw a target object;
  • the first determining module 202 is configured to determine a target object model of the target object to be drawn according to the description parameters of the at least one drawing instruction, where the target object model is a rendering model for image rendering and the description parameters indicate vertex information of the target object model;
  • the second determining module 203 is configured to determine the corresponding target scene according to the target object model.
  • the target object model is obtained by matching the vertex information with vertex information of one or more known object models.
  • Optionally, the vertex information includes a vertex set, and a vertex set includes coordinate values of one or more vertices. FIG. 10 shows a schematic structural diagram of a first determining module 202 provided by an embodiment of the present application; the first determining module 202 includes:
  • the first obtaining submodule 2031 is configured to obtain m sets of vertices from the description parameters, where m is a positive integer;
  • the first determining submodule 2032 is configured to determine a target vertex set that matches the vertex set of one or more known object models from the m vertex sets;
  • the second determining sub-module 2033 is configured to determine the object model corresponding to the target vertex set as the target object model.
  • Optionally, the first determining submodule 2032 is configured to determine, among the m vertex sets, a vertex set that has the same content and the same number of vertices as a vertex set of the one or more known object models as the target vertex set.
  • Optionally, the first obtaining submodule 2031 is configured to: obtain all the vertex sets contained in the description parameters as the m vertex sets; or screen part of the vertex sets in the description parameters to obtain the m vertex sets.
  • Optionally, the vertex information includes the number of vertices, and the target object model is the known object model whose number of vertices equals the number of vertices indicated by the description parameters.
  • the first obtaining module 201 is configured to obtain the at least one drawing instruction by monitoring an OpenGL interface, where the OpenGL interface is an interface between OpenGL and an application program.
  • In some embodiments, the scene recognition apparatus 200 provided by the present application further includes:
  • the third determining module 204 is configured to determine a target virtual environment before the target object model of the target object to be drawn is determined according to the description parameters of the at least one drawing instruction, where the target virtual environment is the virtual environment to which the first image frame belongs, and the first image frame is the image frame to which the target object to be drawn by the at least one drawing instruction belongs;
  • the third acquiring module 205 is configured to acquire the first correspondence between description parameters and object models in the target virtual environment;
  • the first determining module 202 is configured to query the first correspondence relationship and obtain the target object model corresponding to the description parameter.
  • the apparatus 200 further includes:
  • the fourth acquisition module 206 is configured to, after the target virtual environment is determined, acquire the scene judgment strategies in the target virtual environment from multiple relation sets, where the multiple relation sets record scene judgment strategies in different virtual environments, and a scene judgment strategy is a strategy for judging the corresponding scene based on an object model;
  • the second determining module 203 is configured to:
  • determine the target scene according to the target object model and the scene judgment strategy of the target object model in the target virtual environment.
  • the target virtual environment is a game environment
  • the scene judgment strategy includes one or more of the following:
  • when the target object model includes guns and bullets, determining that the target scene is a shooting scene; or
  • when the target object model includes at least three characters, determining that the target scene is a team battle scene; or
  • when the target object model in the first image frame includes a target character and the target object model in the previous image frame of the first image frame does not include the target character, determining that the target scene is a scene in which the target character appears.
  • Optionally, the third obtaining module 205 is configured to: query the first correspondence stored in a local storage device; or download the first correspondence from a server associated with the target virtual environment.
  • In summary, with the scene recognition apparatus provided in the embodiments of the present application, at least one drawing instruction for drawing a target object is acquired, the target object model of the target object to be drawn is determined according to the description parameters of the at least one drawing instruction, and the corresponding target scene is finally determined according to the target object model. Since the target scene is determined from the target object model, and the target object model is determined from the description parameters of the drawing instructions, the terminal does not need to perform additional computation; compared with the large amount of computing resources consumed by the image recognition technology used in the related art, the scene recognition method provided by the embodiments of the present application effectively saves system resources.
  • It should be noted that each module in the above apparatus can be implemented by software, hardware, or a combination of the two. When a module is hardware, it may be a logic integrated circuit module, which may specifically include transistors, logic gate arrays, or arithmetic logic circuits. When a module is software, it exists in the form of a computer program product stored in a computer-readable storage medium and can be executed by a processor; therefore, alternatively, the scene recognition apparatus may be implemented by a processor executing a software program, which is not limited in this embodiment.
  • As shown in FIG. 13, the terminal is divided into a software part and a hardware part. The software part includes a cache, a system API interface layer, a scene recognition module, and a system database. The cache provides an interface for querying the cached drawing instructions, that is, an interface through which the scene recognition module queries the cached drawing instructions of the image frame corresponding to the current scene. The system API interface layer includes an extension interface, which includes a scene query interface; other modules can query the current scene by calling this interface. The system database includes a model database, which separately stores, per application, the description parameters of the object models corresponding to common objects. The scene recognition module analyzes the obtained drawing instructions, compares and matches them against the models in the model database, and makes judgments to identify the current scene; the scene recognition module can realize the functions of the aforementioned scene recognition apparatus. The hardware part includes memory and an eMMC, and the model database file resides in the eMMC.
  • In FIG. 14, the terminal is likewise divided into a software part and a hardware part. The software part includes a cache, a system API interface layer, a scene recognition module, and a system database; the system API interface layer includes the Android extension interface, which includes a scene query interface. For example, a 4D game special-effect enhancement module can be provided in a smartphone; the module can call the scene query interface to query the current scene and add 4D special effects such as vibration or special sound effects according to the scene. For the structure and function of the cache, system database, and scene recognition module, and for the structure and function of the hardware part, reference may be made to FIG. 13 described above.
  • An embodiment of the present application further provides a scene recognition device, including a processor and a memory; when the processor executes a computer program stored in the memory, the scene recognition device executes the scene recognition method provided in the embodiments of the present application. Optionally, the scene recognition device can be deployed in an electronic imaging device.
  • An exemplary embodiment of the present application further provides a terminal, which may include a processor and a memory for storing a computer program runnable on the processor. When executing the computer program, the processor is configured to implement the scene recognition method provided by the foregoing embodiments of the present application, for example: obtain at least one drawing instruction, the drawing instruction being an instruction for drawing an object in the first image frame; determine, based on each drawing instruction in the at least one drawing instruction, the description parameters describing the object corresponding to that drawing instruction; obtain the object identifier of the object described by the determined description parameters; and determine the target scene corresponding to the obtained object identifier.
  • FIG. 15 shows a schematic structural diagram of a terminal 300 involved in an exemplary embodiment of the present application.
  • the terminal 300 may include a processor 302 and a signal interface 304.
  • The processor 302 includes one or more processing cores, and performs various functional applications and data processing by running software programs and modules. The processor 302 may include a CPU and a GPU, and may further optionally include hardware accelerators required to perform operations, such as various logic operation circuits.
  • Optionally, the terminal 300 may further include a transceiver (not shown in the figure), which specifically performs signal transmission and reception. When the processor 302 needs to perform a signal transceiving operation, it can call or drive the transceiver to perform the corresponding operation; thus, when the terminal 300 performs signal transmission and reception, the processor 302 determines or initiates the operation and acts as the initiator, while the transceiver performs the specific transmission and reception and acts as the executor. The transceiver may also be a transceiver circuit, a radio frequency circuit, or a radio frequency unit, which is not limited in this embodiment.
  • Optionally, the terminal 300 further includes components such as a memory 306 and a bus 308; the memory 306 and the signal interface 304 are each connected to the processor 302 through the bus 308. The memory 306 can be used to store software programs and modules; specifically, it may store a program module 3062 required by at least one function. The memory may be random access memory (RAM) or DDR memory, and the program may be an application or a driver. The program module 3062 may include:
  • the first acquiring unit 30621 has the same or similar function as the first acquiring module 201.
  • the first determining unit 30622 has the same or similar function as the first determining module 202.
  • the second determining unit 30623 has the same or similar function as the second determining module 203.
  • An embodiment of the present application also provides a storage medium, which may be a non-volatile computer-readable storage medium storing a computer program; the computer program instructs the terminal to execute any of the scene recognition methods provided in the embodiments of the present application.
  • Optionally, the storage medium may include media capable of storing program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • An embodiment of the present application also provides a computer program product containing instructions; when the computer program product runs on a computer, the computer executes the scene recognition method provided by the embodiments of the present application.
  • the computer program product may include one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • It should be noted that when the scene recognition apparatus provided in the above embodiments performs scene recognition, the division into the above functional modules is merely used as an example for illustration; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the terminal can be divided into different functional modules to complete all or part of the functions described above.
  • A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be implemented by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.


Abstract

A scene recognition method and apparatus, a terminal, and a storage medium, applicable to the recognition of game scenes. The method includes: obtaining one or more drawing instructions, and determining the target object to be drawn according to the description parameters carried in the one or more drawing instructions; since certain specific relationships exist between scenes and objects, the corresponding target scene can be determined according to the target object. Compared with the large amount of computing resources consumed by the image recognition technology used in the related art, this method can effectively save system resources.

Description

Scene recognition method and apparatus, terminal, and storage medium
This application claims priority to Chinese Patent Application No. 201910105807.4, filed on February 1, 2019 and entitled "Scene recognition method and apparatus, terminal, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer graphics technologies, and in particular, to a scene recognition method and apparatus, a terminal, and a storage medium.
Background
With the development of science and technology, terminal processors have become increasingly powerful and can support and run a variety of large competitive games. To further improve the user experience, terminal manufacturers configure the terminal so that, when running a competitive game, it can recognize the game scene of the current image frame of the game and add different special effects for different game scenes. For example, after recognizing the current game scene, a mobile terminal can add special effects such as sound or haptics for different game scenes.
The related art provides a scene recognition method based on image recognition technology. A training set is established in advance from the game pictures of multiple image frames in game scenes; image recognition technology is used to match the game picture of the current image frame against the game pictures of the multiple image frames in the training set, so as to obtain from the training set the game picture of an image frame matching that of the current image frame, and the game scene corresponding to that matched image frame is taken as the game scene corresponding to the game picture of the current image frame.
However, in the scene recognition method provided by the related art, the use of image recognition technology consumes a large amount of computing resources, resulting in a high computation cost for the terminal.
Summary
This application provides a scene recognition method and apparatus, a terminal, and a storage medium, which can save the computing resources of the system.
According to a first aspect, an exemplary embodiment of this application provides a scene recognition method, including: obtaining at least one drawing instruction, where the drawing instruction is used to draw a target object; determining, according to description parameters of the at least one drawing instruction, a target object model of the target object to be drawn, where the target object model is a rendering model for image rendering and the description parameters indicate vertex information of the target object model; and determining a corresponding target scene according to the target object model.
With the scene recognition method provided in the embodiments of this application, the target scene can be determined from the description parameters of the drawing instructions used to draw objects. Compared with the large amount of computing resources consumed by the image recognition technology used in the related art, the scene recognition method provided in the embodiments of this application effectively saves system resources.
In some implementations, determining the corresponding target scene according to the target object model may be determining the corresponding target scene directly from the target object model; in other implementations, it may be determining the corresponding object from the target object model and then determining the corresponding target scene from that object. If the computer stores a correspondence between object models and scenes, the former can be used; if the computer stores a correspondence between objects (for example, the identifier of a game character) and scenes, the latter can be used. Alternatively, the method provided in this application can be adapted according to other types of correspondences stored in the computer, and none of these adaptations should be regarded as departing from the scope of this application.
"At least one drawing instruction" in this application can be understood as "one or more instructions".
In some implementations, one drawing instruction can draw one object, multiple drawing instructions can draw one object, and one drawing instruction can also draw multiple objects; that is, drawing instructions and objects can be in a one-to-one, one-to-many, or many-to-one relationship. Usually, one drawing instruction is used to draw one object. An "object" in this application can be a person or an animal; it can be a static object or a dynamic object; it can exist in the background or in the foreground, and so on, which is not limited in this application.
In some implementations, the target object model is obtained by matching the vertex information with vertex information of one or more known object models. The vertex information of the known object model(s) can exist in the computer system in the form of a correspondence between vertex information and object models. The correspondence can be entered manually or obtained by means such as machine learning (a form of artificial intelligence).
In this application, since the vertex information carried in the description parameters differs, at least two implementations of determining the target object model are provided.
In the first implementation, the vertex information includes a vertex set of the object model, where one vertex set includes coordinate values of one or more vertices. Determining the target object model may then include: obtaining m vertex sets from the description parameters, where m is a positive integer; determining, from the m vertex sets, a target vertex set matching a vertex set of one or more known object models; and determining the object model corresponding to the target vertex set as the target object model.
The number of vertices contained in a vertex set is referred to as the "vertex count".
Based on the first implementation, in some implementations, determining, from the m vertex sets, the target vertex set matching a vertex set of the known object model(s) may include: determining, among the m vertex sets, a vertex set that has the same content and the same vertex count as a vertex set of the known object model(s) as the target vertex set. That is, "matching" means that the two vertex sets have the same number of vertices and the coordinates of the vertices inside the two sets are also the same. If the vertices in a vertex set are ordered, the vertices in the two sets should correspond one-to-one in order with identical coordinate values.
Based on the first implementation, in some implementations, the matching target vertex set is determined first by the vertex count of the vertex set and then by the content of the vertex set. Since different object models usually have different vertex counts, fast screening by vertex count followed by precise screening by vertex set content not only improves the efficiency of determining the target object model, but also guarantees its accuracy.
In some implementations, obtaining the m vertex sets from the description parameters includes: obtaining all vertex sets contained in the description parameters as the m vertex sets; or screening part of the vertex sets in the description parameters to obtain the m vertex sets.
In the second implementation, the vertex information includes the vertex count of the object model, and the target object model is then the known object model whose vertex count equals the vertex count indicated by the description parameters.
It should be noted that the description parameters are used to "indicate" the vertex information, so in this implementation they "indicate" the vertex count. The description parameters can therefore directly carry the vertex count of each vertex set, or carry only the vertex set without the count, because given the vertex set the computer system can compute the number of vertices in the set.
Based on the second implementation, in some implementations, when the vertex information includes vertex counts, determining the target object model may include: obtaining n vertex counts from the description parameters of the at least one drawing instruction, where n is a positive integer; taking each of the n vertex counts in turn as a first vertex count and determining at least one candidate object model among the known object model(s), where each candidate object model has the first vertex count and the known object model(s) record multiple vertex counts corresponding to multiple known object models; when the number of candidate object models is 1, determining the candidate object model as the target object model corresponding to the first vertex count; and when the number of candidate object models is greater than 1, selecting the target object model corresponding to the first vertex count from among the candidate object models.
Determining the target object model by screening on vertex count alone can improve matching efficiency in some scenarios.
In some implementations, the target object model must also satisfy the following condition: the target object model matches the target virtual environment, where the target virtual environment corresponds to the image frame to which the target object to be drawn by the at least one drawing instruction belongs; the target virtual environment is the virtual environment created by the application that generates the at least one drawing instruction, and is an interactive two-dimensional or three-dimensional environment generated by that application on the terminal. Determining whether the target object model matches the target virtual environment may include: determining the target virtual environment; and selecting, from the candidate object models, the object model with the highest similarity to the object models of the target virtual environment as the target object model.
By superimposing the factor of matching the target virtual environment, the accuracy of the determined target object model is guaranteed.
In some implementations, obtaining the at least one drawing instruction includes: obtaining the at least one drawing instruction by monitoring an OpenGL interface, where the OpenGL interface is an interface between OpenGL and an application.
In some implementations, obtaining the at least one drawing instruction by monitoring the OpenGL interface may include: performing a listening-and-obtaining procedure at least once to obtain the at least one drawing instruction.
In some implementations, the listening-and-obtaining procedure includes: monitoring a first OpenGL interface, where the first OpenGL interface is an interface between OpenGL and a designated application, and the designated application is used to draw objects in image frames of a target virtual environment; and after end indication information indicating that one image frame has been drawn is detected on the first OpenGL interface, obtaining the drawing instructions of the completed image frame from the OpenGL cache through a second OpenGL interface, where the second OpenGL interface connects to OpenGL.
By listening on the first OpenGL interface for the end indication information indicating that an image frame is completely drawn, the time point for obtaining the drawing instructions of the completed image frame can be accurately known, and obtaining the at least one drawing instruction at that time point improves the effectiveness of the scene recognition method.
In some implementations, before the target object model is determined according to the description parameters of the at least one drawing instruction, the method further includes: determining a target virtual environment, where the target virtual environment corresponds to the image frame to which the target object to be drawn by the at least one drawing instruction belongs; and obtaining a first correspondence between description parameters and object models in the target virtual environment. The first correspondence may include the correspondence between description parameters and object models, or the correspondence among description parameters, object models, and objects, or the correspondence between description parameters and objects; of course, since objects correspond to object models, the first correspondence may also include the correspondence between description parameters and object models. Correspondingly, determining the target object model according to the description parameters of the at least one drawing instruction may include: querying the first correspondence to obtain the target object model corresponding to the description parameters.
In some implementations, after the target virtual environment is determined, the method further includes: obtaining, from multiple relation sets, the scene judgment strategies in the target virtual environment, where the multiple relation sets record scene judgment strategies in multiple different virtual environments, and a scene judgment strategy is a strategy for judging the corresponding scene based on an object model, or a strategy for judging the corresponding scene based on an object identifier. If the scene judgment strategy judges the scene based on the object model, determining the corresponding target scene according to the target object model includes: determining the target scene according to the target object model and the scene judgment strategy of the target object model in the target virtual environment. If the strategy judges the scene based on the object identifier corresponding to the object model, determining the corresponding target scene according to the target object model includes: determining the target scene according to the target object identifier corresponding to the target object model and the scene judgment strategy of that object identifier in the target virtual environment.
Obtaining the first correspondence between description parameters and object models and the scene judgment strategies in the target virtual environment according to the determined target virtual environment ensures that the obtained first correspondence and scene judgment strategies match the target virtual environment, avoiding the situation where a mismatched first correspondence and scene judgment strategies are obtained and occupy the memory space of the terminal.
In some implementations, the target virtual environment is a game environment, and the target scene is judged based on, for example, whether the target object model includes a gun and bullets, whether it includes multiple characters, or whether a certain character appears for the first time. Specifically, the scene judgment strategy includes one or more of the following: when the target object model includes object models respectively corresponding to a gun and bullets (or when the target object includes a gun and bullets), determining that the target scene is a shooting scene; or, when the target object model includes object models respectively corresponding to at least three characters, determining that the target scene is a team battle scene; or, when the target object model in a first image frame includes an object model corresponding to a target character and the target object model in the previous image frame of the first image frame does not include that object model, determining that the target scene is a scene in which the target character appears. The "at least three characters" here may be characters of different types or of the same type.
In some implementations, the first image frame includes two consecutive image frames.
In some implementations, obtaining the first correspondence between description parameters and object models in the target virtual environment includes: querying the first correspondence stored in a local storage device; or downloading the first correspondence from a server associated with the target virtual environment.
With the scene recognition method provided in the embodiments of this application, at least one drawing instruction for drawing a target object is obtained, the target object model of the target object to be drawn is determined according to the description parameters of the at least one drawing instruction, and finally the corresponding target scene is determined according to the target object model. Since the target scene is determined from the target object model, and the target object model is determined from the description parameters of the drawing instructions, compared with the large amount of computing resources consumed by the image recognition technology used in the related art, the scene recognition method provided in the embodiments of this application effectively saves system resources.
According to a second aspect, an exemplary embodiment of this application provides a scene recognition apparatus, including one or more modules configured to implement the scene recognition method according to any one of the first aspect.
With the scene recognition apparatus provided in the embodiments of this application, at least one drawing instruction for drawing a target object is obtained, the target object model of the target object to be drawn is determined according to the description parameters of the at least one drawing instruction, and finally the corresponding target scene is determined according to the target object model. Since the target scene is determined from the target object model, and the target object model is determined from the description parameters of the drawing instructions, the terminal does not need to perform additional computation; compared with the large amount of computing resources consumed by the image recognition technology used in the related art, this effectively saves system resources.
According to a third aspect, an embodiment of this application provides a computer, for example a terminal device, including a processor and a memory; the processor executes a computer program stored in the memory to implement the scene recognition method according to any one of the first aspect.
According to a fourth aspect, an embodiment of this application provides a storage medium, which may be non-volatile, storing a computer program for implementing the scene recognition method according to any one of the first aspect.
According to a fifth aspect, an embodiment of this application provides a computer program product (or computer program) containing instructions; when the computer program product runs on a computer, the computer is caused to execute the scene recognition method according to any one of the first aspect.
With the scene recognition method and apparatus, terminal, and storage medium provided in this application, at least one drawing instruction for drawing a target object is obtained, the target object model of the target object to be drawn is determined according to the description parameters of the at least one drawing instruction, and finally the corresponding target scene is determined according to the target object model. Since the target scene is determined from the target object model, and the target object model is determined from the description parameters of the drawing instructions, the terminal does not need to perform much additional computation compared with the large amount of computing resources consumed by the image recognition technology used in the related art; the scene recognition method provided in the embodiments of this application effectively saves system resources.
On the basis of the implementations provided in the above aspects, this application can be further combined to provide more implementations.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an application environment involved in a scene recognition method provided by an embodiment of this application;
FIG. 2 is a flowchart of a scene recognition method provided by an embodiment of this application;
FIG. 3 is a flowchart of a process of determining a target object model provided by an embodiment of this application;
FIG. 4 is a schematic diagram of determining a corresponding target scene according to obtained model identifiers provided by an embodiment of this application;
FIG. 5 is a schematic diagram of a target scene being a shooting scene provided by an embodiment of this application;
FIG. 6 is a schematic diagram of a target scene being a team battle scene provided by an embodiment of this application;
FIG. 7 is a schematic diagram of a target scene being a scene in which an enemy appears provided by an embodiment of this application;
FIG. 8 is a flowchart of a scene recognition method provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of a scene recognition apparatus provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of a second obtaining module provided by an embodiment of this application;
FIG. 11 is a schematic structural diagram of another scene recognition apparatus provided by an embodiment of this application;
FIG. 12 is a schematic structural diagram of another scene recognition apparatus provided by an embodiment of this application;
FIG. 13 is a schematic diagram of a deployment form of the scene recognition apparatus in the software and hardware of a terminal provided by an embodiment of this application;
FIG. 14 is a schematic diagram of a deployment form of the scene recognition apparatus in the software and hardware of a terminal provided by an embodiment of this application;
FIG. 15 is a schematic structural diagram of a terminal provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are further described in detail below with reference to the accompanying drawings.
As terminal processors become increasingly powerful, a terminal can support and run a variety of large competitive games. When running a competitive game, the terminal can automatically recognize the game scene of the current image frame of the game and add different content items for different game scenes. For example, when the terminal is a mobile terminal, the content items may include special effects or advertisements; after recognizing the current game scene, the mobile terminal can add special effects such as sound or haptics for different game scenes, or place different advertisements for different game scenes. Therefore, effectively judging the game scene of the current image frame is crucial both for improving the user experience and for increasing advertisers' revenue.
The related art provides two scene recognition methods. The first is based on image recognition technology: the game picture of the current image frame is matched against the game pictures of multiple image frames in a pre-established training set, and when the current image frame matches a certain image frame, the game scene corresponding to that image frame's game picture is taken as the game scene corresponding to the current image frame's game picture. The second queries a limited number of game scenes through some application programming interfaces opened in the game's software development kit (SDK).
However, for the first scene recognition method, the use of image recognition technology consumes a large amount of computing resources, resulting in a high computation cost for the terminal; moreover, determining the game scenes corresponding to multiple frames of game pictures, or determining complex game scenes in large 3D games, severely affects the performance of the terminal, for example causing the terminal to stutter or consume excessive power. For the second scene recognition method, since game companies often open their application programming interfaces only to some terminal manufacturers, the method has low universality, and the number of game scenes that can be queried is also limited, so the second method has considerable limitations.
To help readers understand, before the embodiments of this application are introduced in detail, the terms involved in the embodiments of this application are explained here:
OpenGL (Open Graphics Library) defines a cross-programming-language, cross-platform application programming interface (API) that contains many functions for processing graphics. For example, the API defined by OpenGL includes interfaces for drawing two-dimensional or three-dimensional images (including drawing functions such as glDrawElements()) and interfaces for presenting the images drawn by the drawing functions on the display interface (including presentation functions such as eglSwapBuffers()); the embodiments of this application do not enumerate them one by one. The functions in OpenGL can be called through instructions; for example, a drawing function can be called through a drawing instruction to draw a two-dimensional or three-dimensional image. In the embodiments of this application, two-dimensional and three-dimensional images are collectively referred to as image frames. It should be noted that when objects are drawn through OpenGL, one drawing instruction can draw one object, multiple drawing instructions can draw one object, and one drawing instruction can draw multiple objects; that is, drawing instructions and objects can be in a one-to-one, one-to-many, or many-to-one relationship. Usually, one drawing instruction is used to draw one object.
OpenGL ES (OpenGL for Embedded Systems) is the graphics library in the OpenGL family designed for embedded devices such as mobile phones, personal digital assistants (PDAs), and game consoles.
A virtual environment is an interactive two-dimensional or three-dimensional environment generated on a terminal by an application using the computer graphics system and various interfaces.
A scene is any of the settings in the virtual environment; each scene is built from one or more virtual objects.
For example, if the application is a game application, the corresponding virtual environment is the game environment, in which multiple game scenes exist, such as team battle scenes or gunfight scenes.
An embodiment of this application provides a scene recognition method that can solve the problems in the related art. Referring to FIG. 1, which is a schematic diagram of the application environment involved in the scene recognition method: the method is executed by a scene recognition apparatus located in a terminal, and the apparatus can be implemented by software installed in the terminal. Multiple applications run in the terminal, and the data of each application passes in turn through the OpenGL interface layer, the cache, and the OpenGL application-state driver. The OpenGL interface layer provides the functions in OpenGL for applications to call; the cache stores the instructions with which applications call functions in the OpenGL interface layer; the OpenGL application-state driver passes the instructions called by applications to the GPU for execution. An interface is provided between the scene recognition apparatus of this embodiment and the cache, so that scene recognition can be performed using the data in the cache.
In practical applications of this embodiment, an application can call the drawing functions of OpenGL or OpenGL ES from the OpenGL interface layer by executing drawing instructions, to implement the application's functions of drawing two-dimensional or three-dimensional images. For example, when the application is a game application, it can call drawing functions through drawing instructions to draw the two-dimensional or three-dimensional images used to present game scenes.
For ease of description, the embodiments of this application assume that the currently opened application is the designated application. Referring to FIG. 2, the method includes:
Step 101: The scene recognition apparatus obtains at least one drawing instruction, where the drawing instruction is used to draw a target object.
Assume the first image frame is the image frame currently to be drawn, that is, the image frame to which the target object to be drawn by the at least one drawing instruction belongs; it corresponds to the current scene. The first image frame may include one image frame or at least two consecutive image frames; for example, the first image frame includes two consecutive image frames.
The scene recognition apparatus may obtain one drawing instruction each time it performs scene recognition; however, to ensure the accuracy of the recognition result, it may obtain at least two drawing instructions each time. Optionally, the scene recognition apparatus may obtain the at least two drawing instructions in a specified order; the at least two drawing instructions arranged in that order are also called a drawing instruction stream. The specified order is the order in which the application calls the drawing instructions.
The designated application in the terminal can draw the first image frame by calling multiple drawing instructions; each drawing instruction can draw one object in the first image frame, or each drawing instruction can draw multiple objects in the first image frame, or multiple drawing instructions can draw one object in the first image frame.
In actual implementation of this embodiment, for each image frame, after the designated application reads the multiple drawing instructions corresponding to the image frame from the graphics library into memory, it presents the image frame on the current display interface. The scene recognition apparatus can then read the multiple drawing instructions corresponding to the image frame from memory and obtain at least one drawing instruction from them for scene recognition. If the first image frame consists of at least two consecutive image frames, the scene recognition apparatus can read from memory the multiple drawing instructions corresponding to each of the at least two image frames and obtain at least one drawing instruction from them for scene recognition.
The scene recognition apparatus obtains the at least one drawing instruction by monitoring an OpenGL interface, where the OpenGL interface is an interface between OpenGL and an application.
For example, obtaining the at least one drawing instruction by monitoring the OpenGL interface may include: the scene recognition apparatus performs a listening-and-obtaining procedure at least once to obtain the at least one drawing instruction. The listening-and-obtaining procedure may include:
Step A1: Monitor a first OpenGL interface, where the first OpenGL interface is an interface between OpenGL and the designated application, and the designated application is used to draw objects in image frames of the target virtual environment.
The target virtual environment corresponds to the image frame to which the target object to be drawn by the at least one drawing instruction belongs. The target virtual environment is the virtual environment created by the application that generates the at least one drawing instruction (that is, the designated application mentioned above), and is an interactive two-dimensional or three-dimensional environment generated by that application on the terminal in which the user can be immersed.
The first OpenGL interface is a specified interface among the programming interfaces defined by OpenGL; a function can be added to this interface to issue end indication information, so that when an image frame is completely drawn, calling the interface automatically triggers the function to issue end indication information indicating that the image frame is completely drawn.
Step A2: After end indication information indicating that one image frame has been drawn is detected on the first OpenGL interface, obtain the drawing instructions of the completed image frame from the OpenGL cache through a second OpenGL interface, where the second OpenGL interface connects to OpenGL.
For the drawing instructions of each image frame: before the image frame is completely drawn, the drawing instructions used to draw it can all be stored in the OpenGL cache, which can correspond to a storage area (also called storage space) in memory; the memory can be implemented by random access memory (RAM). The second OpenGL interface is a specified interface among the programming interfaces defined by OpenGL, through which the scene recognition apparatus can obtain from the cache the drawing instructions used to draw the image frame.
Usually, an application can call the drawing functions in OpenGL to draw images, and the drawing instructions calling these functions are cached in memory. Under certain conditions, the memory sends these drawing instructions through the CPU to the graphics card memory (referred to as video memory, also called the frame buffer, which stores rendering data processed or about to be fetched by the graphics chip). Under the control of the GPU, the application draws image frames according to the drawing instructions sent to the video memory, and displays the completed image frames on the display interface by calling the display interface in OpenGL.
Since the first image frame has already been completely drawn when the designated application calls the display function in OpenGL, this embodiment can choose the display interface in OpenGL as the aforementioned first OpenGL interface and add the calling function to that display interface.
For example, the display interface can be the eglSwapBuffers interface. When an image frame is completely drawn, the designated application can present the completed image frame on the display interface by calling the eglSwapBuffers interface; therefore, a function for issuing end indication information is added to the eglSwapBuffers interface. When the designated application calls the eglSwapBuffers interface, that function is triggered to issue the end indication information. The scene recognition apparatus can then, after detecting on the eglSwapBuffers interface the end indication information indicating that one image frame is completely drawn, obtain the drawing instructions of the completed image frame from the OpenGL cache through the second OpenGL interface.
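Real interception of eglSwapBuffers happens in native code below the application; purely to illustrate the control flow of steps A1 and A2, the following Python sketch lets a wrapper around the display call play the role of the added function that issues the end indication (all names are illustrative):

```python
class DrawCommandCache:
    """Stand-in for the OpenGL cache holding the frame's drawing instructions."""
    def __init__(self):
        self.instructions = []
    def record(self, instruction):
        self.instructions.append(instruction)
    def drain(self):
        frame, self.instructions = self.instructions, []
        return frame

cache = DrawCommandCache()
frame_listeners = []                 # registered by the scene recognition apparatus

def egl_swap_buffers():
    """Wrapper standing in for the hooked display interface: the frame is
    complete, so notify listeners (the end indication) with its instructions."""
    completed_frame = cache.drain()
    for listener in frame_listeners:
        listener(completed_frame)

# The scene recognition apparatus registers its listener (step A1) ...
frame_listeners.append(lambda frame: print(f"got {len(frame)} draw instructions"))
# ... the application draws and presents a frame; step A2 fires on present.
cache.record("glDrawElements(...)")
egl_swap_buffers()
```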
Of course, before performing step A1, the scene recognition apparatus needs to register, in its registration list, listening for the event that the first OpenGL interface is called; after performing step A2, it needs to deregister that listening event from its registration list.
Step 102: The scene recognition apparatus determines, according to the description parameters of the at least one drawing instruction, the target object model of the target object to be drawn.
Optionally, as shown in FIG. 3, step 102 may include:
Step 1021: The scene recognition apparatus determines the description parameters of the at least one drawing instruction.
When an application calls a drawing function in OpenGL through a drawing instruction, the drawing instruction contains parameters related to the object it draws, including description parameters used to describe the object. The scene recognition apparatus can extract description parameters from each of the at least one drawing instruction; that is, it can determine at least one description parameter based on each drawing instruction.
It should be noted that there can be many kinds of description parameters for describing an object; for example, an object can be described by parameters such as color and shape. In the field of computer graphics, an object can be characterized by an object model, so the description parameters can be parameters describing the object model, which is a rendering model established for the object for image rendering. An object model consists of multiple vertices on the object surface (also called a discrete lattice or model point cloud); therefore, the description parameters are parameters capable of indicating the vertex information of the object model.
Usually, different object models can have different vertex counts; therefore, the object model corresponding to a vertex count can be determined from that count. In addition, each vertex has its own attribute information, which can include the position coordinate values and texture data of the vertex; the attribute information of the vertices of different object models differs, so the object type can be determined from the attribute information of the vertices of the object model. The vertex information can include the vertex count of the object model and the attribute information of each vertex.
For example, the scene recognition apparatus can determine at least one description parameter from a drawing instruction that draws an object model: "glDrawElements(mode=GL_TRIANGLES, count=4608, type=GL_UNSIGNED_SHORT, indices=NULL)". In this drawing instruction, the parameter value corresponding to count is 4608. Since every three parameter values represent one vertex, that is, the ratio of parameter values to vertices is 3:1, the vertex count of the object model corresponding to the parameter value 4608 is 4608/3, that is, 1536. The scene recognition apparatus can thus determine one description parameter from this drawing instruction, namely the vertex count of the object model. Of course, different drawing instructions can carry different parameters, so the method of obtaining description parameters from them can differ; the embodiments of this application do not enumerate them one by one.
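The count-to-vertex arithmetic above can be illustrated with a small sketch that extracts the count parameter from a textual record of the call; the string format simply mirrors how the instruction is quoted here and is not a real API:

```python
import re

def vertex_count_from_instruction(instruction_text):
    """Extract count= from a recorded glDrawElements call and convert it to a
    vertex count, using the 3:1 ratio of parameter values to vertices."""
    match = re.search(r"count=(\d+)", instruction_text)
    if match is None:
        return None
    return int(match.group(1)) // 3

call = "glDrawElements(mode=GL_TRIANGLES, count=4608, type=GL_UNSIGNED_SHORT, indices=NULL)"
print(vertex_count_from_instruction(call))  # -> 1536
```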
Step 1022: The scene recognition apparatus determines, according to the description parameters, the target object model of the target object to be drawn.
The target object model is a rendering model established for the target object for image drawing. As mentioned above, one drawing instruction can correspond to one or more description parameters, and each description parameter can correspond to one target object model; correspondingly, the scene recognition module can determine one or more target object models of the target objects to be drawn according to the description parameters of each drawing instruction.
In actual implementation of this embodiment, an object can correspond to an object identifier, which can uniquely identify an object within the designated application; for example, the object identifier can include the name of the object, the pose of the object, the character name of the object, or a prop name. An object model can correspond to a model identifier, which can uniquely identify an object model within the designated application. An object corresponds to an object model; therefore, the object identifier corresponds to the object model and the model identifier.
It should be noted that the same object can have multiple object models, since the object can have different forms and can be drawn from different angles. Correspondingly, one object identifier can correspond to multiple model identifiers. Of course, usually one object identifier can also correspond to one model identifier.
Both model identifiers and object identifiers can be represented as character strings including digits, letters, and/or text; for example, the object identifier can be the name or code of the object. When there is a one-to-one correspondence between objects and object models, the object identifier and the model identifier can be the same identifier, that is, the two identifiers are reused; of course, they can also be different identifiers. When there is a one-to-many relationship between objects and object models, the object identifier and the model identifier can also be different identifiers; for example, the model identifier is an index number and the object identifier is the object name.
Optionally, the target object model is obtained by matching the vertex information with the vertex information of one or more known object models. The known object model(s) can be stored in the terminal in association with the model identifier (for example, an index number) corresponding to each object model; through a given model identifier, the object model corresponding to that identifier can be uniquely obtained in the terminal (for example, the model identifier points to the object model via a pointer).
Depending on the vertex information of the target object model, the description parameters indicating the vertex information differ, and so does the way the scene recognition apparatus determines the target object model of the target object to be drawn. The embodiments of this application provide two implementations of determining the target object model of the target object to be drawn.
In the first implementation, the vertex information includes attribute information of the vertices. Optionally, the attribute information can be a vertex set.
Each object model has a certain number of vertices, and each vertex corresponds to a vertex set represented by three-dimensional coordinate values; the vertex set can be used to characterize the position coordinate values in the attribute information of the vertices, and one vertex set includes the coordinate values of one or more vertices. The vertex count and a vertex set of the vertices having that count can serve as the vertex information indicated by the description parameters, used to describe the object model of an object (in the following description, the object name is used as the object identifier to characterize the object). It should be noted that a vertex set can be represented in the form of a vertex array, which refers to one or more vertices represented as arrays, where an array is an ordered sequence of elements.
For example, for the object model of a tree, the vertex count of the object model is 361, and a vertex array corresponding to the 361 vertices is {<-0.66891545, -0.29673234, -0.19876061>, <-0.5217651, 0.022111386, -0.3163959>, <-0.84291315, -0.103498506, -0.14875318>, ...}; this vertex array includes 361 arrays, each array representing one vertex and including 3 elements that represent the coordinate values of the vertex. The vertex count and this vertex array can serve as description parameters of the tree, used to describe the tree's object model.
Then, determining the target object model of the target object to be drawn may include:
Step Y1: The scene recognition apparatus obtains m vertex sets from the description parameters, where m is a positive integer.
Optionally, the way the scene recognition apparatus obtains m vertex arrays from the determined description parameters can include the following two optional implementations:
In the first optional implementation, the scene recognition apparatus can obtain all the vertex sets contained in the description parameters and use all of them as the m vertex sets.
In the second optional implementation, the scene recognition apparatus can screen part of the vertex sets in the description parameters to obtain the m vertex sets. For example, m = 10.
The first optional implementation traverses all vertex sets in the determined description parameters, which guarantees the accuracy of scene recognition. However, when the number of determined description parameters is large, choosing the first implementation leads to a high computation cost for the terminal; therefore, to improve the computing efficiency of the terminal, the second optional implementation can be chosen, that is, part of the description parameters are screened out of the determined description parameters and the vertex sets in that part are used as the m vertex sets, to reduce the computation cost.
Screening part of the vertex sets in the description parameters can include screening part of the description parameters and determining the vertex sets corresponding to that part as the m vertex sets. Further, screening part of the description parameters can include: randomly selecting part of the determined description parameters; or counting the occurrence frequency of the drawing instructions corresponding to the determined description parameters and selecting the description parameters corresponding to drawing instructions with lower occurrence frequency. Since the objects drawn by drawing instructions with lower occurrence frequency are distinctive, this helps determine the target scene accurately.
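Both screening options described here (random selection, or preferring description parameters whose drawing instructions occur rarely) could be sketched as follows; the helper names and the data are illustrative assumptions:

```python
import random
from collections import Counter

def screen_description_params(params, m, instruction_of, rare_first=True):
    """Pick m description parameters, either randomly or preferring those whose
    drawing instructions appear least often (rare instructions draw distinctive
    objects, which helps pin down the target scene)."""
    if not rare_first:
        return random.sample(params, m)
    freq = Counter(instruction_of(p) for p in params)
    return sorted(params, key=lambda p: freq[instruction_of(p)])[:m]

# Illustrative usage: parameters tagged with the instruction that produced them.
params = [("vs1", "drawA"), ("vs2", "drawA"), ("vs3", "drawB")]
picked = screen_description_params(params, m=2, instruction_of=lambda p: p[1])
print(picked)  # "drawB" occurs least often, so vs3 is preferred
```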
Step Y2: The scene recognition apparatus determines, from the m vertex sets, a target vertex set matching a vertex set of one or more known object models.
Optionally, the scene recognition apparatus can determine, among the m vertex sets, a vertex set that has the same content and the same vertex count as a vertex set of the known object model(s) as the target vertex set. The process of determining the target vertex set can include:
Step S1: The scene recognition apparatus takes each of the m vertex sets in turn as a first vertex set and determines at least one candidate vertex set among the vertex sets of the known object model(s); each of the at least one candidate vertex set has the first vertex count; the vertex sets of the known object model(s) record multiple vertex sets corresponding to known object identifiers; the first vertex count is the vertex count of the first vertex set.
Table 1 schematically shows the recorded correspondence between the vertex sets of multiple known object models and other parameters, where the other parameters include the index number, the vertex count of the object model, and the object identifier. In the table, the object identifier is represented by the name of the object, the model identifier is represented by the index number, and object identifiers correspond one-to-one to index numbers. A correspondence exists among the vertex count of an object model, the vertex set of the object model, and the object identifier, so the object identifier of the object described can be determined from the vertex count and the vertex set of the object model. For example, the object model whose object identifier is "king" has 1243 vertices, its corresponding vertex set is {<-0.65454495, -0.011083424, -0.027084148>, <-0.8466003, 0.026489139, -0.14481458>, <-0.84291315, -0.103498506, -0.14875318>, ...}, and the index number of the "king" object model is 25.
Table 1

  Index number | Vertex count | Vertex set | Object identifier
  25 | 1243 | {<-0.65454495, -0.011083424, -0.027084148>, <-0.8466003, 0.026489139, -0.14481458>, ...} | King
  26 | 1068 | {...} | Soldier
  27 | 361 | {<-0.66891545, -0.29673234, -0.19876061>, <-0.5217651, 0.022111386, -0.3163959>, ...} | Tree
  28 | 361 | {...} | Pistol
It should be noted that Table 1 is only a schematic table; in an actual implementation, the other parameters in Table 1 may include only at least two of the index number, the vertex count of the object model, and the object identifier, or any one of the index number, the vertex count of the object model, and the object identifier.

Further, in step S1, the process in which the scene recognition apparatus takes each of the m vertex sets as the first vertex set and determines at least one candidate vertex set among the vertex sets of the one or more known object models includes: taking each of the m vertex sets in turn as the first vertex set and performing a set filtering procedure.

The set filtering procedure may include:

Step B1: The scene recognition apparatus determines the first vertex count of the first vertex set.

The scene recognition apparatus selects any vertex set from the description parameters and determines the first vertex count of that vertex set.

Of course, the scene recognition apparatus may also directly use the vertex count extracted from the draw instruction as the first vertex count and determine the vertex set corresponding to that first vertex count as the first vertex set.

Step B2: The scene recognition apparatus checks whether the vertex sets of the one or more known object models contain a vertex set whose vertex count is the first vertex count.

Step B3: When the vertex sets of the one or more known object models contain a vertex set whose vertex count is the first vertex count, the scene recognition apparatus determines the vertex sets whose vertex count is the first vertex count as candidate vertex sets.

Each of the candidate vertex sets has the first vertex count.

Step B4: When the vertex sets of the one or more known object models contain no vertex set whose vertex count is the first vertex count, the scene recognition apparatus updates the first vertex set to the next vertex set and performs the set filtering procedure again.

In an actual implementation of the embodiments of this application, the scene recognition apparatus may build the m vertex sets into a queue of length m and, starting from the first element of the queue, perform the set filtering procedure of steps B1 to B4 in turn until all elements in the queue have been traversed.
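A minimal sketch of steps B1 to B4, assuming the known object models are indexed by vertex count in an unordered_multimap; the KnownModel structure and the indexing are illustrative assumptions:

    #include <array>
    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    using Vertex = std::array<float, 3>;
    using VertexSet = std::vector<Vertex>;

    // Illustrative record of a known object model, as in Table 1.
    struct KnownModel {
        int index;           // model identifier (index number)
        VertexSet vertices;  // recorded vertex set
    };

    // Steps B1-B4: given the first vertex set, collect the known models whose
    // vertex count equals the first vertex count. An empty result corresponds
    // to step B4 (move on to the next vertex set in the queue).
    std::vector<const KnownModel*> FilterCandidates(
            const VertexSet& firstSet,
            const std::unordered_multimap<std::size_t, KnownModel>& byVertexCount) {
        std::vector<const KnownModel*> candidates;
        auto range = byVertexCount.equal_range(firstSet.size());  // B1 + B2
        for (auto it = range.first; it != range.second; ++it)
            candidates.push_back(&it->second);                    // B3
        return candidates;
    }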
Step S2: The scene recognition apparatus compares the first vertex set with each vertex set among the at least one candidate vertex set to obtain a target vertex set that matches the first vertex set.

For example, matching here means that the vertex set contents are identical and the vertex counts are equal.

Because different object models may happen to have the same vertex count, after the at least one candidate vertex set having the first vertex count is determined in step S1, the at least one candidate vertex set can be further filtered by comparing the contents of the vertex sets (for example, the coordinate values in the vertex sets), so that the object identifier is determined accurately.

Step Y3: The scene recognition apparatus determines the object model corresponding to the target vertex set as the target object model.

Optionally, the scene recognition apparatus may determine the model identifier corresponding to the target vertex set by querying a correspondence or the like, and determine the object model corresponding to that model identifier as the target object model. The correspondence may be stored as a table.

For example, if the target vertex set is {<-0.66891545, -0.29673234, -0.19876061>, <-0.5217651, 0.022111386, -0.3163959>, <-0.84291315, -0.103498506, -0.14875318>, ...}, querying Table 1 yields the model identifier, namely index number 27, so the target object model is the model of "Tree".
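Continuing the sketch above (and reusing its VertexSet and KnownModel types), steps S2 and Y3 reduce to an exact element-wise comparison followed by returning the matched model's index number. Exact floating-point equality mirrors the "identical content" criterion in the text, although a real implementation might tolerate tiny numeric differences:

    // Steps S2 + Y3: compare the first vertex set with each candidate's
    // recorded vertex set; on an exact content match, return that model's
    // index number (e.g. 27, the "Tree" model in Table 1), or -1 if no
    // candidate matches.
    int MatchTargetModel(const VertexSet& firstSet,
                         const std::vector<const KnownModel*>& candidates) {
        for (const KnownModel* model : candidates) {
            if (model->vertices == firstSet)  // equal count and equal coordinates
                return model->index;
        }
        return -1;
    }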
In an actual implementation of the embodiments of this application, different object models with the same vertex count in fact have quite different vertex sets. Therefore, in the second optional implementation above, filtering a subset of the vertex sets in the description parameters as the m vertex sets is sufficient to guarantee the accuracy of scene recognition, while reducing the computational cost and improving the efficiency of scene recognition.

In the second implementation, the vertex information of the target object model indicated by the description parameter includes the vertex count of the object model. As before, the target object model is a rendering model used for image rendering, and the target object model of the target object to be drawn, determined from the description parameters of the at least one draw instruction, is the known object model whose vertex count is equal to the vertex count indicated by the description parameter.

Further, the process in which the scene recognition apparatus determines the target object model of the target object to be drawn may include:

Step Y4: The scene recognition apparatus obtains n vertex counts from the description parameters of the at least one draw instruction, where n is a positive integer.

Similar to step Y1 above, the n vertex counts may be all vertex counts contained in the description parameters determined by the scene recognition apparatus, or a subset of the vertex counts filtered by the scene recognition apparatus from the determined description parameters.

To reduce the computational cost and improve the terminal's computational efficiency, the n vertex counts may be a subset of the vertex counts in the description parameters; for example, n = 10.

Step Y5: The scene recognition apparatus takes each of the n vertex counts in turn as a first vertex count and determines, among the known object models, at least one candidate object model, where each of the at least one candidate object model has the first vertex count; the known object models record the vertex counts corresponding to one or more known object models.

Table 2 schematically shows the recorded correspondence between the vertex counts of multiple known object models and other parameters, including the index number and the object identifier. Among these object models, the object identifier is represented by the object's name and the object model by the index number; object identifiers and index numbers are in one-to-one correspondence, and there is a correspondence between the vertex count of an object model and the object identifier, so the vertex count of an object model determines the object identifier of the object it describes. For example, the object model whose object identifier is "King" has 1243 vertices, and its index number is 25.
Table 2 (partially reconstructed from the surrounding description; the original shows the table as an image)

    Index number | Vertex count | Object identifier
    25           | 1243         | King
    26           | 1068         | Soldier
    27           | 361          | Tree
    28           | 361          | Gun
It should be noted that Table 2 is only a schematic table; in an actual implementation, the other parameters in Table 2 may include only the index number or only the object identifier.

Step Y6: When there is exactly one candidate object model, the scene recognition apparatus determines that candidate object model as the target object model corresponding to the first vertex count.

For example, if the vertex count determined by the scene recognition apparatus is 1068, then in Table 2 only index number 26 corresponds to a vertex count of 1068, so there is exactly one candidate, and the scene recognition apparatus determines the object model of "Soldier" as the target object model corresponding to the first vertex count.

Step Y7: When there is more than one candidate object model, the scene recognition apparatus selects the target object model corresponding to the first vertex count from among the candidate object models.

For example, if the vertex count determined by the scene recognition apparatus is 361, then in Table 2 the models with index numbers 27 and 28 both have 361 vertices, so there are two candidate object models, and the scene recognition apparatus must further select the target object model from between the object models of "Tree" and "Gun".

Optionally, the process in which the scene recognition apparatus selects the target object model from the candidate object models may include:

Step C1: The scene recognition apparatus determines the target virtual environment to which the first image frame belongs.

The target virtual environment is the virtual environment to which the image frame currently to be drawn (that is, the first image frame) belongs; it is an interactive two-dimensional or three-dimensional environment generated by the specified application on the terminal. One specified application may be associated with one target virtual environment, and different target virtual environments may contain different objects. For example, when the specified application is a game application, the target virtual environment is the game environment of that application, and different game applications can have different kinds of game environments: a gunfight game typically contains objects such as pistols, rifles, or jeeps, while a history-themed game typically contains various historical figures. As another example, when the specified application is a payment application, the target virtual environment may be a trading environment, which typically contains objects such as weights or currency.

Each target virtual environment may correspond to multiple object models. In an actual implementation of the embodiments of this application, from those object models one may pick the object models capable of characterizing the target virtual environment, which may be the object models that appear in the target virtual environment or the object models unique to it, so that the target object model can be determined.

The picking of the object models capable of characterizing the target virtual environment may be done by the application's developer; when the terminal runs the application, the scene recognition apparatus in the terminal can automatically obtain the result of that picking.

Step C2: The scene recognition apparatus selects, from the candidate object models, the object model that matches the target virtual environment as the target object model.

The scene recognition apparatus may compare the candidate object models with the multiple object models that can appear in the target virtual environment; if a candidate object model is consistent with one of those object models, the candidate object model can be determined as the target object model.

For example, if the current target virtual environment is a history-themed game environment, then when the candidate object models are the models of "Tree" and "Gun", the "Tree" model appears among the object models that can appear in a history-themed game environment, while a "Gun" obviously cannot appear among the object models characterizing that environment. Therefore, "Tree" can be determined as the target object model with 361 vertices.

Alternatively, the type of each candidate object model may be compared with the types of the object models that can appear in the target virtual environment; if a candidate object model's type is consistent with one of the types in the target virtual environment, the candidate object model can be determined as the target object model.
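Steps Y5 to C2 can be sketched as follows; the per-environment set of characterizing model index numbers is an illustrative assumption:

    #include <cstddef>
    #include <unordered_map>
    #include <unordered_set>
    #include <vector>

    // byCount: known models indexed by vertex count, storing their index
    // numbers, as in Table 2. environmentModels: index numbers of the models
    // characterizing the current target virtual environment.
    int SelectByVertexCount(std::size_t vertexCount,
                            const std::unordered_multimap<std::size_t, int>& byCount,
                            const std::unordered_set<int>& environmentModels) {
        std::vector<int> candidates;
        auto range = byCount.equal_range(vertexCount);           // step Y5
        for (auto it = range.first; it != range.second; ++it)
            candidates.push_back(it->second);

        if (candidates.size() == 1) return candidates.front();   // step Y6
        for (int index : candidates)                             // steps Y7 + C1/C2
            if (environmentModels.count(index)) return index;
        return -1;  // no candidate matches the environment
    }

With the Table 2 data, for example, a vertex count of 361 yields the candidates 27 and 28; in a history-themed environment whose characterizing set contains 27, the "Tree" model is selected.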
Optionally, before step 102, the scene recognition method may further include:

Step D1: The scene recognition apparatus determines the target virtual environment, which is the virtual environment to which the first image frame belongs; the first image frame is the image frame to which the target object to be drawn by the at least one draw instruction belongs.

Step D2: The scene recognition apparatus obtains the first correspondence between description parameters and object models in the target virtual environment.

The first correspondence may include a correspondence between description parameters and object models; it may also include a correspondence among description parameters, object models, and objects, or a correspondence between description parameters and objects; of course, because objects correspond to object models, such a correspondence likewise amounts to a correspondence between description parameters and object models. The embodiments of this application are described using a first correspondence between description parameters and object models as an example.

Correspondingly, in step 1022 the process in which the scene recognition apparatus determines the target object model of the target object to be drawn from the description parameter may include: the scene recognition apparatus queries the first correspondence to obtain the target object model corresponding to the description parameter.

Different target virtual environments may correspond to different first correspondences between description parameters and object models. Therefore, to improve the accuracy and efficiency of determining the object model of the object described by a description parameter, the first correspondence for the target virtual environment can be obtained in advance.

Optionally, because the object identifiers included in different applications differ considerably, an application's developer may provide the first correspondence between description parameters and object models for that application.

Optionally, the first correspondence may be stored in the terminal's local storage device; for example, it may be stored in the terminal's flash memory (for a mobile terminal, the flash memory may be an embedded multimedia card (Embedded Multi Media Card, eMMC)). Alternatively, the first correspondence may be stored in a server associated with the target virtual environment, for example in a cloud associated with the target virtual environment.

The process in which the scene recognition apparatus obtains the first correspondence between description parameters and object models in the target virtual environment may then include: querying whether the local storage device stores the first correspondence for the target virtual environment; when the local storage device does not store it, downloading the first correspondence for the target virtual environment from the server associated with the target virtual environment. The downloaded first correspondence for the target virtual environment can then be stored in the local storage device.

The first correspondence may be stored as a database file (for example, gamemodel.db). Optionally, each target virtual environment may correspond to one database file, and the first correspondences (that is, the database files) stored in the local storage device may be indexed by the name of the target virtual environment (or the name of the application). The scene recognition apparatus can query, by the name of the target virtual environment, whether the local storage device stores the corresponding first correspondence; in this way, when the local storage device stores first correspondences for multiple target virtual environments, the name of the target virtual environment allows a quick query of whether a first correspondence is stored.
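A minimal sketch of the query-locally-then-download logic, assuming one database file per target virtual environment named after that environment; the directory layout and the DownloadFromServer placeholder are assumptions:

    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    // Placeholder: a real implementation would fetch the environment's
    // correspondence database from the associated server and write it to dest.
    bool DownloadFromServer(const std::string& envName, const fs::path& dest) {
        (void)envName; (void)dest;
        return false;
    }

    // Returns the path of the first-correspondence database file for the given
    // target virtual environment, downloading and caching it on a local miss.
    fs::path GetCorrespondenceDb(const std::string& envName, const fs::path& storageDir) {
        fs::path dbFile = storageDir / (envName + ".db");  // e.g. "AAAA.db"
        if (!fs::exists(dbFile)) {
            DownloadFromServer(envName, dbFile);
        }
        return dbFile;
    }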
In an actual implementation of the embodiments of this application, determining the name of the target virtual environment may proceed as follows: when the specified application is launched, it first needs to initialize its target virtual environment, and to do so it calls the initialization interface in OpenGL to run the target virtual environment of the specified application. A function for obtaining the process name of the currently running process can be set in that initialization interface, and the name of the target virtual environment is then determined from a preset correspondence between process names and the names of target virtual environments.

For example, the specified application may call the eglInitialize interface to initialize the target virtual environment; a callback function for obtaining the process name of the process (for example, an iGraphicsGameScenseInit function) is set in the eglInitialize interface, and the name of the target virtual environment is then determined by looking up the current process's name in the preset correspondence between process names and target-virtual-environment names. Optionally, this correspondence may be stored in the form of an xml table.

For example, assuming the target virtual environment is a game scene in a game application, the xml table may be stored as follows:
(The xml table is shown as an image in the original; it maps process names to the names of target virtual environments.)
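A minimal xml sketch consistent with the mapping described below; the element and attribute names are assumptions, and only the process name and the environment name come from the text:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- maps a running process to the name of its target virtual environment -->
    <scene-config>
        <game process="com.tent.tmgp.sgame" env="AAAA"/>
    </scene-config>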
In this xml table, the stored process name is "com.tent.tmgp.sgame", and the name of the corresponding target virtual environment is "AAAA".

Optionally, the scene recognition apparatus may obtain the first correspondence only when a terminal with the specified application installed runs that application for the first time. This arrangement not only allows the first correspondences stored in the terminal to grow adaptively, but also saves the terminal's memory space as much as possible. Moreover, as the specified application is upgraded, the corresponding first correspondence can be extended and upgraded in the terminal or in the server associated with the target virtual environment, so that the scene recognition apparatus can obtain it and recognize new scenes.

Of course, when the first correspondence meets a deletion condition, it can be deleted. The deletion condition may be that the specified application is uninstalled or that the user triggers a deletion instruction; this effectively reduces the memory space the first correspondence occupies in the terminal.
Step 103: The scene recognition apparatus determines the corresponding target scene according to the target object model.

In one optional example, the scene recognition apparatus may determine the corresponding target scene from the target model identifier (that is, the model identifier of the target object model) by querying a correspondence or the like. For example, the scene recognition apparatus stores in advance a correspondence between model identifiers and scenes, and the target scene can be obtained by querying it with the identifier of the target object model.

In another optional example, the process in which the scene recognition apparatus determines the target scene corresponding to the obtained object model may include: the scene recognition apparatus determines that target scene according to the target object model and the scene judgment policy for the target object model in the target virtual environment. In that case, after the scene recognition apparatus determines the target virtual environment in step D1 above, the method may further include:

Step D3: The scene recognition apparatus obtains, from multiple relation sets, the scene judgment policies for the target virtual environment; the multiple relation sets record scene judgment policies for multiple different virtual environments, each virtual environment may correspond to multiple scene judgment policies, and a scene judgment policy is a policy for judging the corresponding scene based on object models.

Similar to the first correspondence, the multiple relation sets may be stored in the terminal's local storage device, for example in the terminal's flash memory (for a mobile terminal, the flash memory may be a multimedia card), or in a server associated with the target virtual environment, for example in a cloud associated with the target virtual environment. The process in which the scene recognition apparatus obtains the scene judgment policies for the target virtual environment from the multiple relation sets may include: querying whether the local storage device stores the scene judgment policies for the target virtual environment; when the local storage device does not store them, downloading the scene judgment policies for the target virtual environment from the server associated with the target virtual environment; the downloaded scene judgment policies can then be stored in the local storage device. Moreover, as the specified application is upgraded, the corresponding scene judgment policies in the multiple relation sets can be extended and upgraded, so that the scene recognition apparatus can obtain them and recognize new scenes. The scene judgment policies in the multiple relation sets can be added or modified as actually needed, and they may cover the key scenes in the specified application.

Of course, when a scene judgment policy meets a deletion condition, it is deleted. The deletion condition may be that the specified application is uninstalled or that the user triggers a deletion instruction; this effectively reduces the memory space the scene judgment policy occupies in the terminal.

Fig. 4 shows a schematic diagram of the scene recognition apparatus determining the corresponding target scene from the obtained model identifiers. The scene recognition apparatus determines the target scene from the model identifiers obtained in step 102 above and the obtained scene judgment policies for the target virtual environment. The obtained model identifiers are "Model 1", "Model 2", and "Model 3", and the obtained scene judgment policies for the target virtual environment include four policies: "Scene judgment policy 1", "Scene judgment policy 2", "Scene judgment policy 3", and "Scene judgment policy 4". "Scene judgment policy 3" stores the correspondence between the model identifiers "Model 1", "Model 2", and "Model 3" and the target scene, so the target scene corresponding to the obtained model identifiers can be determined based on "Scene judgment policy 3".

Optionally, the target virtual environment may be a game environment, and the scene judgment policies may include one or more of the following: when the target object models include a gun and a bullet, determine that the target scene is a gun-firing scene; or, when the target object models include at least three characters, determine that the target scene is a team-battle scene (optionally, the characters may be hero characters); or, when the target object models in the first image frame include a target character and the target object models in the image frame preceding the first image frame do not include the target character, determine that the target scene is a scene in which the target character appears (optionally, the target character may be an enemy character).
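The first two policies can be expressed as simple predicates over the set of recognized model identifiers; the literal identifier strings are illustrative assumptions:

    #include <string>
    #include <unordered_set>

    using ModelSet = std::unordered_set<std::string>;

    // Policy 1: gun and bullet models both present => gun-firing scene.
    bool IsGunFiringScene(const ModelSet& models) {
        return models.count("gun") && models.count("bullet");
    }

    // Policy 2: at least three (hero) character models present => team battle.
    bool IsTeamBattleScene(const ModelSet& models, const ModelSet& heroIds) {
        int heroes = 0;
        for (const auto& id : models)
            if (heroIds.count(id)) ++heroes;
        return heroes >= 3;
    }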
For example, refer to Fig. 5, which shows a schematic diagram in which the target scene is a gun-firing scene. According to steps 101 and 102 above, the model identifiers of the objects described by the determined description parameters are "gun" a1 and "bullet" a2; based on the scene judgment policy above, the target scene can be determined to be a gun-firing scene. Fig. 5 schematically shows the object model of "gun" a1 and the object model of "bullet" a2.

For example, refer to Fig. 6, which shows a schematic diagram in which the target scene is a team-battle scene. According to steps 101 and 102 above, the model identifiers of the objects described by the determined description parameters are hero a3, hero a4, and hero a5; based on the scene judgment policy above, the target scene can be determined to be a team-battle scene. Fig. 6 schematically shows the object models of hero a3, hero a4, and hero a5. Corresponding special effects can be added for this target scene, for example automatically sending a rally request message to teammates through the game SDK interface. In this way, when the user uses the specified application, the team-battle scene can be recognized automatically without any user operation, and teammates can be notified of the rally in time, which raises the probability of winning the team battle and thereby enhances the user experience.

Because the terminal does not need to perform extra computation, the terminal load is small, and large, complex scenes can be recognized in real time, for example a three-dimensional game-environment scene running at 60 frames per second.

In an actual implementation of the embodiments of this application, an interface for querying the current scene recognition result can be set at the API interface layer of the terminal's operating system; calling this interface executes the scene recognition method provided by the embodiments of this application to accurately obtain the current scene. Therefore, when the specified application contains a module for enhancing game special effects, that module can call the interface for querying the current scene recognition result to set special effects accurately for each scene, effectively enhancing the user experience. When the specified application contains a module for placing advertisements, that module can call the interface to place advertisements accurately for each scene. The advertisement may be a third-party advertisement unrelated to the specified application, and such targeted placement can effectively increase the advertiser's revenue; or it may be an internal advertisement related to the specified application's scene, for example, in a combat scene of a game application, an advertisement selling virtual equipment, which can improve the user's experience of the game.

It is worth noting that, in the scene judgment policy above that determines the target scene to be an enemy-appearing scene, the judgment involves two image frames. If an "enemy" character (that is, the model indicated by the model identifier is "enemy", for example the model identifier is "enemy") appears at a relatively distant position in the game scene, the "enemy" character displayed on a terminal with a small display is very small and hard for the user's naked eye to recognize, so the user has difficulty determining the current target scene.

For example, refer to Fig. 7, which shows a schematic diagram in which the target scene is an enemy-appearing scene: the left side of Fig. 7 shows an image frame containing an enemy character, and the right side shows a magnified image of the enemy character in that frame. As Fig. 7 makes clear, when the enemy character is at a distant position in the picture, it is hard for the human eye to notice. From the perspective of draw instructions, however, the draw instructions for the same object are identical whether the object is viewed from far away or up close. Therefore, by examining the draw instructions that draw the "enemy" character in two consecutive image frames, the target scene can be determined to be an enemy-appearing scene. Corresponding special effects can be added for this target scene, for example presenting alert information, which may be presented as voice, vibration, a light signal, or a marker image, such as a red flashing alert around the edges of the screen that lasts 3 seconds. Of course, this scene judgment policy can also be applied to recognizing tiny distant figures or tiny objects in a scene. It is effective for terminals with small screens, such as mobile terminals, and helps improve the user experience.
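The third policy compares the recognized model sets of two consecutive image frames; a minimal sketch, with the identifier string as an assumption:

    #include <string>
    #include <unordered_set>

    // The target character (e.g. "enemy") is present in the current frame's
    // recognized model set but absent from the previous frame's, so the
    // character has just appeared.
    bool TargetCharacterAppeared(const std::unordered_set<std::string>& current,
                                 const std::unordered_set<std::string>& previous,
                                 const std::string& targetId) {
        return current.count(targetId) && !previous.count(targetId);
    }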
Optionally, in an actual implementation of the embodiments of this application, if the scene recognition apparatus in the terminal continuously uses this scene recognition method while the application runs, it consumes performance and occupies resources of the terminal's operating system. A switch can therefore be provided for the scene recognition method, corresponding to its enabled state. Before performing step 101 above, the scene recognition apparatus may first determine the current enabled state of the scene recognition method, and perform the scene recognition method of steps 101 to 103 only when the enabled state is enabled. The enabled state may be turned on by the user or automatically by the scene recognition apparatus.

It should be noted that, in step 102 above, if the first implementation is used to obtain the object model of the object described by the determined description parameters, the vertex sets of the one or more known object models can first be filtered by the first vertex count of the obtained vertex set to obtain at least one candidate vertex set, and the obtained vertex set can then be compared one by one with the at least one candidate vertex set to obtain the target vertex set whose content is identical to the obtained vertex set. Because different object models generally have different vertex counts, filtering by vertex count first quickly and efficiently rules out most non-matching vertex sets, and filtering by vertex set on that basis then accurately obtains the object identifier of the object described by the determined description parameters. Further, on this basis, the process of determining the target scene in step 103 can be more efficient, and the determined target scene more accurate.

Further, because the scene recognition method provided by the embodiments of this application can recognize scenes efficiently without consuming a large amount of computing resources, the terminal can use it for scene recognition in real time.

It is worth noting that, because a target object corresponds to at least one target object model while the target object identifier uniquely identifies the target object, the object identifier of the target object may correspond to the identifiers of at least one target object model. In the scene recognition method provided by the embodiments of this application, in the first optional implementation, after the scene recognition apparatus determines the target object model of the target object to be drawn from the description parameters, it determines the target scene directly from that target object model; in the second optional implementation, after determining the target object model, the scene recognition apparatus first determines the target object from the target object model (that is, determines the target object identifier) and then determines the target scene from the target object. The foregoing embodiment was described using the first optional implementation as an example; the following embodiment briefly describes the second optional implementation. As shown in Fig. 8, the scene recognition method includes:
Step 201: The scene recognition apparatus obtains at least one draw instruction, where the draw instruction is used to draw a target object.

For the specific process of step 201, refer to step 101 above; details are not repeated here.

Step 202: The scene recognition apparatus determines, according to the description parameters of the at least one draw instruction, the target object model of the target object to be drawn.

For the specific process of step 202, refer to step 102 above; details are not repeated here.

Step 203: The scene recognition apparatus determines the corresponding target object according to the target object model.

For example, the scene recognition apparatus may determine the corresponding target object from the target object model by querying a correspondence or the like, and the correspondence may be stored as a table. For example, if the index number of the determined target object model is 26, querying Table 1 yields the object identifier "Soldier" for the target object.

Step 204: The scene recognition apparatus determines the corresponding target scene according to the target object.

In one optional example, the scene recognition apparatus may determine the corresponding target scene from the target object identifier (that is, the object identifier of the target object) by querying a correspondence or the like. For example, the scene recognition apparatus stores in advance a correspondence between object identifiers and scenes, and the target scene can be obtained by querying it with the target object's object identifier.

In another optional example, the process in which the scene recognition apparatus determines the target scene corresponding to the obtained target object may include: the scene recognition apparatus determines that target scene according to the target object identifier and the scene judgment policy for the target object identifier in the target virtual environment. In that case, before step 201, the scene recognition apparatus obtains, from multiple relation sets, the scene judgment policies for the target virtual environment; the multiple relation sets record scene judgment policies for multiple different virtual environments, each virtual environment may correspond to multiple scene judgment policies, and a scene judgment policy is a policy for judging the corresponding scene based on object identifiers. For the process of obtaining the scene judgment policies, refer to step D3 above. As in the first implementation above, when a scene judgment policy meets a deletion condition, it is deleted.

It is worth noting that, when objects and object models are in one-to-one correspondence, the object identifier and the model identifier may be the same identifier, in which case the processes of determining the target object and determining the target object model are essentially the same.

It should be noted that the order of the steps of the scene recognition method provided by the embodiments of this application can be adjusted appropriately, and steps can be added or removed as circumstances require. Any variant method readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application and is therefore not described further.

In summary, the scene recognition method provided by the embodiments of this application obtains at least one draw instruction used to draw the objects in the first image frame, then determines, based on each of the at least one draw instruction, the description parameters describing the object corresponding to each draw instruction, and finally determines the corresponding target scene from the object models of the objects described by the determined description parameters. Because the target scene is determined from description parameters extracted from the draw instructions, the terminal does not need extra computation; compared with the large amount of computing resources consumed by the image recognition techniques used in the related art, the scene recognition method provided by the embodiments of this application effectively saves system resources.

Moreover, the scene recognition method provided by the embodiments of this application depends neither on terminal manufacturers opening application program interfaces nor on the logical rules of game design, and it is platform-generic for operating systems that use OpenGL.
Referring to Fig. 9, an embodiment of this application provides a scene recognition apparatus 200, and the apparatus 200 includes:

a first obtaining module 201, configured to obtain at least one draw instruction, where the draw instruction is used to draw a target object;

a first determining module 202, configured to determine, according to the description parameters of the at least one draw instruction, the target object model of the target object to be drawn, where the target object model is a rendering model used for image drawing, and the description parameters indicate the vertex information of the target object model; and

a second determining module 203, configured to determine the corresponding target scene according to the target object model.

Optionally, the target object model is obtained by matching the vertex information against the vertex information of one or more known object models.

Optionally, the vertex information includes vertex sets, and one vertex set includes the coordinate values of one or more vertices. As shown in Fig. 10, which is a schematic structural diagram of a first determining module 202 provided by an embodiment of this application, the first determining module 202 includes:

a first obtaining submodule 2031, configured to obtain m vertex sets from the description parameters, where m is a positive integer;

a first determining submodule 2032, configured to determine, from the m vertex sets, a target vertex set that matches the vertex set of one or more known object models; and

a second determining submodule 2033, configured to determine the object model corresponding to the target vertex set as the target object model.

Optionally, the first determining submodule 2032 is configured to determine, as the target vertex set, a vertex set among the m vertex sets whose content is identical to, and whose vertex count is equal to, the vertex set of one or more known object models.

Optionally, the first obtaining submodule 2031 is configured to:

obtain all vertex sets contained in the description parameters; or,

filter a subset of the vertex sets in the description parameters to obtain the m vertex sets.
Optionally, the vertex information includes a vertex count, and the target object model is a known object model whose vertex count is equal to the vertex count indicated by the description parameters.
Optionally, the first obtaining module 201 is configured to obtain the at least one draw instruction by listening on the OpenGL interface, where the OpenGL interface is the interface between OpenGL and applications.

Optionally, referring to Fig. 11, an embodiment of this application provides a scene recognition apparatus 200 that further includes:

a third determining module 204, configured to determine a target virtual environment before the target object model of the target object to be drawn is determined according to the description parameters of the at least one draw instruction, where the target virtual environment is the virtual environment to which the first image frame belongs, and the first image frame is the image frame to which the target object to be drawn by the at least one draw instruction belongs; and

a third obtaining module 205, configured to obtain the first correspondence between description parameters and object models in the target virtual environment;

where the first determining module 202 is configured to query the first correspondence to obtain the target object model corresponding to the description parameters.

Optionally, referring to Fig. 12, the apparatus 200 further includes:

a fourth obtaining module 206, configured to obtain, after the target virtual environment is determined, the scene judgment policies for the target virtual environment from multiple relation sets, where the multiple relation sets record scene judgment policies for multiple different virtual environments, and a scene judgment policy is a policy for judging the corresponding scene based on object models;

where the second determining module 203 is configured to:

determine the target scene according to the target object model and the scene judgment policy for the target object model in the target virtual environment.

Optionally, the target virtual environment is a game environment, and the scene judgment policies include one or more of the following:

when the target object models include a gun and a bullet, determining that the target scene is a gun-firing scene; or

when the target object models include at least three characters, determining that the target scene is a team-battle scene; or

when the target object models in the first image frame include a target character and the target object models in the image frame preceding the first image frame do not include the target character, determining that the target scene is a scene in which the target character appears.

Optionally, the third obtaining module 205 is configured to:

query the first correspondence stored in a local storage device; or download the first correspondence from a server associated with the target virtual environment.

In summary, the scene recognition apparatus provided by the embodiments of this application obtains at least one draw instruction for drawing a target object, determines the target object model of the target object to be drawn from the description parameters of the at least one draw instruction, and finally determines the corresponding target scene from that target object model. Because the target scene is determined from the target object model, which is in turn determined from the description parameters of the draw instructions, the terminal does not need extra computation; compared with the large amount of computing resources consumed by the image recognition techniques used in the related art, the scene recognition method provided by the embodiments of this application effectively saves system resources.
A person skilled in the art can clearly understand that the scene recognition apparatus provided by the above embodiments and the scene recognition method embodiments belong to the same concept; for the specific working processes of the apparatus, modules, and submodules described above, refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.

Moreover, each module in the above apparatus may be implemented in software, in hardware, or in a combination of both. When at least one module is hardware, the hardware may be a logic integrated circuit module, which may specifically include transistors, logic gate arrays, or arithmetic logic circuits. When at least one module is software, the software exists in the form of a computer program product stored in a computer-readable storage medium, and the software can be executed by a processor. Alternatively, therefore, the scene recognition apparatus may be implemented by a processor executing a software program; this embodiment does not limit this.

Refer to Fig. 13, in which the terminal is divided into a software part and a hardware part. In the terminal, the software part includes a cache, a system API interface layer, a scene recognition module, and a system database. The cache exposes an interface for querying cached draw instructions; that is, the cache provides the scene recognition module with an interface for querying the cached draw instructions of the image frame corresponding to the current scene. The system API interface layer includes extension interfaces, among which is a scene query interface through which other modules can query the current scene of the specified application. The system database includes a model database, which stores, separately for each application, the description parameters of the object models of common objects. The scene recognition module analyzes the obtained draw instructions, compares and matches them against the models in the model database, and judges and recognizes the current scene. The scene recognition module can implement the functions of the scene recognition apparatus described above. The hardware part includes memory and an eMMC, and the model database file resides in the eMMC.

As an example, taking the terminal as a smartphone running the Android operating system, refer to Fig. 14, in which the terminal is divided into a software part and a hardware part. In the Android smartphone, the software part includes a cache, a system API interface layer, a scene recognition module, and a system database; the system API interface layer includes Android extension interfaces, among which is a scene query interface. For example, a 4D game special-effect enhancement module can be installed in the smartphone; it can call the scene query interface to query the current scene and, based on the scene, add 4D special effects such as vibration or special sound effects. For the structures and functions of the cache, system database, and scene recognition module, refer to Fig. 13 above; likewise for the hardware part.
An embodiment of this application further provides a scene recognition apparatus including a processor and a memory; when the processor executes the computer program stored in the memory, the scene recognition apparatus performs the scene recognition method provided by the embodiments of this application. Optionally, the scene recognition apparatus may be deployed in an electronic imaging device.

An exemplary embodiment of this application further provides a terminal, which may include a processor and a memory for storing a computer program executable on the processor; when executing the computer program, the processor implements any of the scene recognition methods provided by the above embodiments of this application. For example, the processor is configured to: obtain at least one draw instruction, the draw instruction being an instruction for drawing an object in a first image frame; determine, based on each of the at least one draw instruction, the description parameters describing the object corresponding to each draw instruction; obtain the object identifier of the object described by the determined description parameters; and determine the target scene corresponding to the obtained object identifier.

Specifically, refer to Fig. 15, which shows a schematic structural diagram of a terminal 300 according to an exemplary embodiment of this application; the terminal 300 may include a processor 302 and a signal interface 304.

The processor 302 includes one or more processing cores. The processor 302 performs various functional applications and data processing by running software programs and modules. The processor 302 may include a CPU and a GPU, and may optionally further include hardware accelerators needed for computation, such as various logic arithmetic circuits.

There may be multiple signal interfaces 304, which are used to establish connections with other apparatuses or modules; for example, a transceiver may be connected through the signal interface 304. Optionally, therefore, the apparatus 300 may further include the transceiver (not shown), which specifically performs signal transmission and reception. When the processor 302 needs to perform a transceiving operation, it may invoke or drive the transceiver to perform the corresponding operation; thus, when the apparatus 300 transmits or receives signals, the processor 302 decides or initiates the operation, acting as the initiator, while the transceiver performs the actual transmission and reception, acting as the executor. The transceiver may also be a transceiver circuit, a radio-frequency circuit, or a radio-frequency unit, which this embodiment does not limit.

Optionally, the terminal 300 further includes components such as a memory 306 and a bus 308, where the memory 306 and the signal interface 304 are each connected to the processor 302 through the bus 308.

The memory 306 can be used to store software programs and modules. Specifically, the memory 306 can store a program module 3062 required by at least one function. The memory may be a random access memory (Random Access Memory, RAM) or DDR. The program may be an application program or a driver.
The program module 3062 may include:

a first obtaining unit 30621, which has the same or similar functions as the first obtaining module 201;

a first determining unit 30622, which has the same or similar functions as the first determining module 202; and

a second determining unit 30623, which has the same or similar functions as the second determining module 203.
An embodiment of this application further provides a storage medium, which may be a non-volatile computer-readable storage medium storing a computer program; the computer program instructs a terminal to perform any of the scene recognition methods provided by the embodiments of this application. The storage medium may include various media capable of storing program code, such as a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.

An embodiment of this application further provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the scene recognition method provided by the embodiments of this application. The computer program product may include one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through such a medium. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)).

It should be noted that, when the scene recognition apparatus provided by the above embodiments performs scene recognition, the division into the above functional modules is only used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the functions described above.

The serial numbers of the above embodiments of this application are for description only and do not indicate the superiority of any embodiment.

A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

The above are only optional embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included in the protection scope of this application.

Claims (25)

  1. A scene recognition method, characterized in that the method comprises:
    obtaining at least one draw instruction, wherein the draw instruction is used to draw a target object;
    determining, according to description parameters of the at least one draw instruction, a target object model of the target object to be drawn, wherein the target object model is a rendering model used for image drawing, and the description parameters indicate vertex information of the target object model; and
    determining a corresponding target scene according to the target object model.
  2. The method according to claim 1, characterized in that the target object model is obtained by matching the vertex information against vertex information of one or more known object models.
  3. The method according to claim 2, characterized in that the vertex information comprises vertex sets, and one vertex set comprises coordinate values of one or more vertices;
    the determining, according to the description parameters of the at least one draw instruction, the target object model of the target object to be drawn comprises:
    obtaining m vertex sets from the description parameters, wherein m is a positive integer;
    determining, from the m vertex sets, a target vertex set that matches a vertex set of one or more known object models; and
    determining an object model corresponding to the target vertex set as the target object model.
  4. The method according to claim 3, characterized in that the determining, from the m vertex sets, a target vertex set that matches a vertex set of one or more known object models comprises:
    determining, as the target vertex set, a vertex set among the m vertex sets whose content is identical to, and whose vertex count is equal to, a vertex set of one or more known object models.
  5. The method according to claim 3 or 4, characterized in that the obtaining m vertex sets from the description parameters comprises:
    obtaining all vertex sets contained in the description parameters; or,
    filtering a subset of the vertex sets in the description parameters to obtain the m vertex sets.
  6. The method according to claim 2, characterized in that the vertex information comprises a vertex count, and the target object model is an object model, among the one or more known object models, whose vertex count is equal to the vertex count indicated by the description parameters.
  7. The method according to any one of claims 1 to 6, characterized in that the target object model further satisfies the following condition: the target object model matches a target virtual environment, wherein the target virtual environment is a two-dimensional or three-dimensional virtual environment established by the application that generates the at least one draw instruction.
  8. The method according to any one of claims 1 to 7, characterized in that the obtaining at least one draw instruction comprises: obtaining the at least one draw instruction by listening on an OpenGL interface, wherein the OpenGL interface is an interface between OpenGL and an application.
  9. The method according to claim 1, characterized in that, before the determining, according to the description parameters of the at least one draw instruction, the target object model of the target object to be drawn, the method further comprises:
    determining a target virtual environment, wherein the target virtual environment is a virtual environment to which a first image frame belongs, and the first image frame is an image frame to which the target object to be drawn by the at least one draw instruction belongs; and
    obtaining a first correspondence between description parameters and object models in the target virtual environment;
    the determining, according to the description parameters of the at least one draw instruction, the target object model of the target object to be drawn comprises:
    querying the first correspondence to obtain the target object model corresponding to the description parameters.
  10. The method according to claim 9, characterized in that, after the determining a target virtual environment, the method further comprises:
    obtaining, from multiple relation sets, a scene judgment policy for the target virtual environment, wherein the multiple relation sets record scene judgment policies for multiple virtual environments, and a scene judgment policy is a policy for judging a corresponding scene based on object models;
    the determining a corresponding target scene according to the target object model comprises:
    determining the target scene according to the target object model and the scene judgment policy for the target object model in the target virtual environment.
  11. The method according to claim 10, characterized in that the target virtual environment is a game environment, and the scene judgment policy comprises one or more of the following:
    when the target object model comprises object models respectively corresponding to a gun and a bullet, determining that the target scene is a gun-firing scene; or
    when the target object model comprises object models respectively corresponding to at least three characters, determining that the target scene is a team-battle scene; or
    when the target object model in the first image frame comprises a target character and the target object model in an image frame preceding the first image frame does not comprise the target character, determining that the target scene is a scene in which the target character appears.
  12. The method according to claim 9, characterized in that the obtaining a first correspondence between description parameters and object models in the target virtual environment comprises:
    querying the first correspondence stored in a local storage device; or downloading the first correspondence from a server associated with the target virtual environment.
  13. A scene recognition apparatus, characterized in that the apparatus comprises:
    a first obtaining module, configured to obtain at least one draw instruction, wherein the draw instruction is used to draw a target object;
    a first determining module, configured to determine, according to description parameters of the at least one draw instruction, a target object model of the target object to be drawn, wherein the target object model is a rendering model used for image drawing, and the description parameters indicate vertex information of the target object model; and
    a second determining module, configured to determine a corresponding target scene according to the target object model.
  14. The apparatus according to claim 13, characterized in that the target object model is obtained by matching the vertex information against vertex information of one or more known object models.
  15. The apparatus according to claim 14, characterized in that the vertex information comprises vertex sets, and one vertex set comprises coordinate values of one or more vertices;
    the first determining module comprises:
    a first obtaining submodule, configured to obtain m vertex sets from the description parameters, wherein m is a positive integer;
    a first determining submodule, configured to determine, from the m vertex sets, a target vertex set that matches a vertex set of one or more known object models; and
    a second determining submodule, configured to determine an object model corresponding to the target vertex set as the target object model.
  16. The apparatus according to claim 15, characterized in that
    the first determining submodule is configured to:
    determine, as the target vertex set, a vertex set among the m vertex sets whose content is identical to, and whose vertex count is equal to, a vertex set of one or more known object models.
  17. The apparatus according to claim 15 or 16, characterized in that
    the first obtaining submodule is configured to:
    obtain all vertex sets contained in the description parameters; or,
    filter a subset of the vertex sets in the description parameters to obtain the m vertex sets.
  18. The apparatus according to claim 14, characterized in that
    the vertex information comprises a vertex count, and the target object model is a known object model whose vertex count is equal to the vertex count indicated by the description parameters.
  19. The apparatus according to any one of claims 13 to 18, characterized in that the target object model further satisfies the following condition: the target object model matches a target virtual environment, wherein the target virtual environment is a two-dimensional or three-dimensional virtual environment established by the application that generates the at least one draw instruction.
  20. The apparatus according to any one of claims 13 to 19, characterized in that
    the first obtaining module is configured to obtain the at least one draw instruction by listening on an OpenGL interface, wherein the OpenGL interface is an interface between OpenGL and an application.
  21. The apparatus according to claim 13, characterized in that the apparatus further comprises:
    a third determining module, configured to determine a target virtual environment before the target object model of the target object to be drawn is determined according to the description parameters of the at least one draw instruction, wherein the target virtual environment is a virtual environment to which a first image frame belongs, and the first image frame is an image frame to which the target object to be drawn by the at least one draw instruction belongs; and
    a third obtaining module, configured to obtain a first correspondence between description parameters and object models in the target virtual environment;
    wherein the first determining module is configured to:
    query the first correspondence to obtain the target object model corresponding to the description parameters.
  22. The apparatus according to claim 21, characterized in that the apparatus further comprises:
    a fourth obtaining module, configured to obtain, after the target virtual environment is determined, a scene judgment policy for the target virtual environment from multiple relation sets, wherein the multiple relation sets record scene judgment policies for multiple virtual environments, and a scene judgment policy is a policy for judging a corresponding scene based on object models;
    wherein the second determining module is configured to:
    determine the target scene according to the target object model and the scene judgment policy for the target object model in the target virtual environment.
  23. The apparatus according to claim 21, characterized in that the third obtaining module is configured to:
    query the first correspondence stored in a local storage device; or download the first correspondence from a server associated with the target virtual environment.
  24. A terminal, characterized by comprising a processor and a memory;
    wherein the processor executes a computer program stored in the memory to implement the scene recognition method according to any one of claims 1 to 12.
  25. A storage medium, characterized in that a computer program is stored in the storage medium, and the computer program is used to implement the scene recognition method according to any one of claims 1 to 12.
PCT/CN2020/074016 2019-02-01 2020-01-23 Scene recognition method and apparatus, terminal, and storage medium WO2020156487A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20748486.6A EP3905204A4 (en) 2019-02-01 2020-01-23 METHOD AND APPARATUS FOR SCENE RECOGNITION, TERMINAL AND STORAGE MEDIA
US17/389,688 US11918900B2 (en) 2019-02-01 2021-07-30 Scene recognition method and apparatus, terminal, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910105807.4 2019-02-01
CN201910105807.4A 2019-02-01 Scene recognition method and apparatus, terminal, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/389,688 Continuation US11918900B2 (en) 2019-02-01 2021-07-30 Scene recognition method and apparatus, terminal, and storage medium

Publications (1)

Publication Number Publication Date
WO2020156487A1 true WO2020156487A1 (zh) 2020-08-06

Family

ID=71840258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074016 2019-02-01 2020-01-23 Scene recognition method and apparatus, terminal, and storage medium

Country Status (4)

Country Link
US (1) US11918900B2 (zh)
EP (1) EP3905204A4 (zh)
CN (1) CN111598976B (zh)
WO (1) WO2020156487A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112657201A (zh) * 2020-12-23 2021-04-16 上海米哈游天命科技有限公司 Character arm length determination method, apparatus, device, and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112807695B (zh) * 2021-02-24 2024-05-28 网易(杭州)网络有限公司 Game scene generation method and apparatus, readable storage medium, and electronic device
CN113262466A (zh) * 2021-05-11 2021-08-17 Oppo广东移动通信有限公司 Vibration control method and apparatus, mobile terminal, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110251896A1 (en) * 2010-04-09 2011-10-13 Affine Systems, Inc. Systems and methods for matching an advertisement to a video
KR20180091794A (ko) * 2018-08-07 2018-08-16 브루비스 멀티 미디어 크리에이티브 컴퍼니 리미티드 Projection mapping method
CN108550190A (zh) * 2018-04-19 2018-09-18 腾讯科技(深圳)有限公司 Augmented reality data processing method and apparatus, computer device, and storage medium

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006202083A (ja) * 2005-01-21 2006-08-03 Seiko Epson Corp Image data generation apparatus and printing apparatus
US8655052B2 (en) * 2007-01-26 2014-02-18 Intellectual Discovery Co., Ltd. Methodology for 3D scene reconstruction from 2D image sequences
JP2010540002A (ja) * 2007-08-20 2010-12-24 ダブル・フュージョン・インコーポレイテッド Independently defined alteration of output from a software executable using later-integrated code
CN101770655B (zh) * 2009-12-25 2012-04-25 电子科技大学 Method for simplifying a large-scale virtual dynamic scene
CN102157008B (zh) * 2011-04-12 2014-08-06 电子科技大学 Real-time rendering method for large-scale virtual crowds
US9715761B2 (en) * 2013-07-08 2017-07-25 Vangogh Imaging, Inc. Real-time 3D computer vision processing engine for object recognition, reconstruction, and analysis
US9569885B2 (en) * 2014-01-02 2017-02-14 Nvidia Corporation Technique for pre-computing ambient obscurance
US10438312B2 (en) * 2014-04-05 2019-10-08 Sony Interactive Entertainment LLC Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
CN104050708A (zh) * 2014-06-09 2014-09-17 无锡梵天信息技术股份有限公司 Implementation method of an LOD system for a 3D game engine
CN115690558A (zh) * 2014-09-16 2023-02-03 华为技术有限公司 Data processing method and device
CN105353871B (zh) * 2015-10-29 2018-12-25 上海乐相科技有限公司 Method and apparatus for controlling a target object in a virtual reality scene
US20170278308A1 (en) * 2016-03-23 2017-09-28 Intel Corporation Image modification and enhancement using 3-dimensional object model based recognition
CN106447768B (zh) * 2016-10-13 2020-06-19 自然资源部国土卫星遥感应用中心 Method suitable for parallel rendering of 3D models in a 3D scene
CN107019901B (zh) 2017-03-31 2020-10-20 北京大学深圳研究生院 Method for building an automatic game-playing robot for board and card games based on image recognition and automated control
CN108876925B (zh) * 2017-05-09 2022-03-04 北京京东尚科信息技术有限公司 Virtual reality scene processing method and apparatus
CN111768496B (zh) * 2017-08-24 2024-02-09 Oppo广东移动通信有限公司 Image processing method and apparatus, server, and computer-readable storage medium
CN107670279A (zh) * 2017-10-26 2018-02-09 天津科技大学 Development method and system for WebGL-based 3D web games
CN108176048B (zh) * 2017-11-30 2021-02-19 腾讯科技(深圳)有限公司 Image processing method and apparatus, storage medium, and electronic apparatus
CN108434742B (zh) * 2018-02-02 2019-04-30 网易(杭州)网络有限公司 Method and apparatus for processing virtual resources in a game scene
CN108421257B (zh) * 2018-03-29 2021-02-12 网易(杭州)网络有限公司 Method and apparatus for determining invisible elements, storage medium, and electronic apparatus
CN108648238B (zh) * 2018-04-25 2021-09-14 深圳市商汤科技有限公司 Virtual character driving method and apparatus, electronic device, and storage medium
CN109224442B (zh) * 2018-09-03 2021-06-11 腾讯科技(深圳)有限公司 Data processing method and apparatus for a virtual scene, and storage medium
CN109285211B (zh) * 2018-10-29 2023-03-31 Oppo广东移动通信有限公司 Picture rendering method, apparatus, terminal, and storage medium
CN109544663B (zh) * 2018-11-09 2023-01-06 腾讯科技(深圳)有限公司 Virtual scene recognition and interactive key-position matching method and apparatus for an application

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110251896A1 (en) * 2010-04-09 2011-10-13 Affine Systems, Inc. Systems and methods for matching an advertisement to a video
CN108550190A (zh) * 2018-04-19 2018-09-18 腾讯科技(深圳)有限公司 Augmented reality data processing method and apparatus, computer device, and storage medium
KR20180091794A (ko) * 2018-08-07 2018-08-16 브루비스 멀티 미디어 크리에이티브 컴퍼니 리미티드 Projection mapping method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, JING: " Research on Fast Scene Classification Based on Scene Gist", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, no. 8, 15 August 2013 (2013-08-15), pages 1 - 57, XP055830559, ISSN: 1674-0246 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112657201A (zh) * 2020-12-23 2021-04-16 上海米哈游天命科技有限公司 Character arm length determination method, apparatus, device, and storage medium
CN112657201B (zh) * 2020-12-23 2023-03-07 上海米哈游天命科技有限公司 Character arm length determination method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
EP3905204A1 (en) 2021-11-03
EP3905204A4 (en) 2022-03-09
US20210354037A1 (en) 2021-11-18
CN111598976A (zh) 2020-08-28
CN111598976B (zh) 2023-08-22
US11918900B2 (en) 2024-03-05

Similar Documents

Publication Publication Date Title
WO2020156487A1 (zh) Scene recognition method and apparatus, terminal, and storage medium
US20230222155A1 (en) Prioritized device actions triggered by device scan data
US20200089661A1 (en) System and method for providing augmented reality challenges
CN109032793B (zh) Resource configuration method, apparatus, terminal, and storage medium
CN111190926B (zh) Resource caching method, apparatus, device, and storage medium
JP2023011794A (ja) Sound source determination method and apparatus, computer program, and electronic device
US11887229B2 (en) Method and system for populating a digital environment using a semantic map
US11951390B2 (en) Method and system for incremental topological update within a data flow graph in gaming
US11351460B2 (en) System and method for creating personalized game experiences
US20200391109A1 (en) Method and system for managing emotional relevance of objects within a story
CN107871143A (zh) Image recognition method and apparatus, computer apparatus, and computer-readable storage medium
CN112927332A (zh) Skeletal animation update method, apparatus, device, and storage medium
CN110287767A (zh) Attack-resistant liveness detection method, apparatus, computer device, and storage medium
US10535192B2 (en) System and method for generating a customized augmented reality environment to a user
CN108537149B (zh) Image processing method and apparatus, storage medium, and electronic device
US11660538B2 (en) Methods and systems for game system creation
CN113577766B (zh) Object processing method and apparatus
US10614626B2 (en) System and method for providing augmented reality challenges
CN109598190A (zh) Method, apparatus, computer device, and storage medium for action recognition
CN110089076A (zh) Method and apparatus for implementing information interaction
CN112464691A (zh) Image processing method and apparatus
CN110597566A (zh) Application processing method and apparatus, storage medium, and electronic device
US11863863B2 (en) System and method for frustum context aware digital asset suggestions
CN117131240B (zh) Service recommendation method, electronic device, and computer-readable storage medium
CN111068333B (zh) Video-based vehicle abnormal state detection method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20748486

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020748486

Country of ref document: EP

Effective date: 20210727