CN113457163A - Region marking method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113457163A
CN113457163A (application CN202110809005.9A; granted as CN113457163B)
Authority
CN
China
Prior art keywords
bitmap
contour line
scene map
coordinate system
region
Prior art date
Legal status
Granted
Application number
CN202110809005.9A
Other languages
Chinese (zh)
Other versions
CN113457163B (en
Inventor
韩天赋
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110809005.9A priority Critical patent/CN113457163B/en
Publication of CN113457163A publication Critical patent/CN113457163A/en
Application granted granted Critical
Publication of CN113457163B publication Critical patent/CN113457163B/en
Status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of this application disclose a region marking method, apparatus, device, and storage medium. The method comprises the following steps: first, a first contour line of a region in a scene map is acquired; the first contour line is then mapped into a bitmap coordinate system corresponding to the scene map to obtain a second contour line; finally, one-to-one region coloring is performed in the bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map. Because the method marks each region in the bitmap with a unique color, it reduces the computational load and complexity of identifying the region where a target is located and lowers the physical-performance overhead of the computing device: the region where the target is located can be identified accurately simply by determining the color of the corresponding pixel in the bitmap corresponding to the scene map and using the one-to-one correspondence between colors and regions.

Description

Region marking method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for region marking.
Background
In some virtual scenes, the region where a target is located needs to be displayed based on application requirements. For example, in the virtual scene of a game, when a character controlled by a player moves to a certain position, information about the region where the character is located in the virtual scene, such as the region name, needs to be displayed in the game interface, either to improve the player's in-scene experience or to support certain game mechanics. As an example, in team mode, the name of the region where each player is located is shown beside that player's avatar icon, e.g., player A is in the main town, player B is in the wild, and player C is in the village. Before such information can be displayed, the region where the target is located must be identified, and one prerequisite for identifying it is to mark the regions in the scene accurately so that different regions can be distinguished.
One current region marking method is implemented with polygonal meshes. In this scheme, each region is divided manually, a mesh is generated for each region, and collision boxes are attached to the meshes. Each collision box carries a region marker that indicates its region. When the region where a target is located needs to be identified, a physics engine casts a ray downward from the target's position, and the region is determined from the region marker carried by the collision box hit by the ray. With this marking method, subsequently identifying the region where a target is located requires a large amount of complex computation by the physics engine, and the physical-performance overhead on the computing device is high.
Disclosure of Invention
The embodiments of this application provide a region marking method, apparatus, device, and storage medium that reduce the computational load and complexity of identifying the region where a target is located and lower the physical-performance overhead of the computing device.
In view of the above, a first aspect of the present application provides a region marking method, including:
acquiring a first contour line of an area in a scene map;
mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line;
and performing one-to-one region coloring in a bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
A second aspect of the present application provides a region marking apparatus, comprising:
the first contour line acquisition unit is used for acquiring a first contour line of an area in a scene map;
the second contour line obtaining unit is used for mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line;
and the coloring unit is used for performing one-to-one region coloring in a bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
A third aspect of the present application provides a region marking device, comprising a processor and a memory:
the memory is used for storing program code and transmitting the program code to the processor;
the processor is configured to perform the steps of the region marking method according to the first aspect, according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing program code for performing the steps of the region marking method of the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
In the embodiments of this application, a region marking method is provided. The method first acquires a first contour line of a region in a scene map; it then maps the first contour line into a bitmap coordinate system corresponding to the scene map to obtain a second contour line; finally, it performs one-to-one region coloring in the bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map. Because the method marks each region in the bitmap with a unique color, it reduces the computational load and complexity of identifying the region where a target is located and lowers the physical-performance overhead of the computing device: the region where the target is located can be identified accurately simply by determining the color of the corresponding pixel in the bitmap corresponding to the scene map and using the one-to-one correspondence between colors and regions.
Drawings
Fig. 1 is a flowchart of a region marking method according to an embodiment of the present application;
Fig. 2 is a schematic view of a scene map according to an embodiment of the present application;
Fig. 3A is a schematic diagram of a first contour line drawn for a region in a scene map according to an embodiment of the present application;
Fig. 3B is a schematic diagram of the first contour line shown apart from the scene map;
Fig. 4 is a schematic diagram of a second contour line obtained by mapping the first contour line shown in Fig. 3B according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a bitmap after region coloring according to an embodiment of the present application;
Fig. 6A is a flowchart of another region marking method according to an embodiment of the present application;
Fig. 6B is a flowchart of another region marking method according to an embodiment of the present application;
Fig. 7 is a flowchart of another region marking method according to an embodiment of the present application;
Fig. 8A is a schematic structural diagram of a region marking apparatus according to an embodiment of the present application;
Fig. 8B is a schematic structural diagram of another region marking apparatus according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a server for region marking according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a terminal device for region marking according to an embodiment of the present application.
Detailed Description
Currently, marking regions in a virtual scene for identification usually requires dividing the regions and then building a mesh for each region. Collision boxes are placed in the mesh, and each region is indicated by the region marker carried on its collision box. To identify the region where a target is located, a physics engine typically casts a ray, detects the collision box hit by the ray, and reads its region marker. Identifying a region with this scheme involves a large amount of complex computation, and the physical-performance overhead on the computing device is high; this may affect the running and display of the virtual scene, for example causing the scene picture to stutter or some data in the scene to be displayed abnormally.
To solve the above problems, this application provides a new region marking method, together with a corresponding apparatus, device, and storage medium. Region contour lines are extracted from the scene map and converted into a bitmap coordinate system, and the regions are colored based on the contour lines in that coordinate system so that regions and colors are in one-to-one correspondence. Because each region in the scene is uniquely marked by a color, subsequently identifying the region where a target is located becomes more convenient, simpler, and easier to implement, saving physical-performance overhead on the device.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To facilitate understanding of the technical solution of the present application, please refer to Fig. 1, which is a flowchart of a region marking method according to an embodiment of the present application. The region marking method shown in Fig. 1 includes:
s101: a first contour line of an area in a scene map is obtained.
To facilitate understanding of this solution, the scene map is described below using a game as an example.
To offer diversified gameplay, a game usually provides multiple scene maps, some of which are unlocked gradually as the player's character level increases. For example, scene maps 1 to 3 may be available as soon as the player's character is created, scene maps 4 and 5 unlocked when the character reaches level 20, and scene maps 6 and 7 unlocked at level 30. Generally, only one scene map is shown in the game interface at a time. A scene map can be three-dimensional or two-dimensional, depending on the game.
In this step, the contour lines of the regions in the scene map need to be acquired; for ease of distinction, a contour line of a region acquired from the scene map in this step is called a first contour line. For convenience of processing, a three-dimensional scene map can first be converted into a two-dimensional one and the first contour line acquired from that, for example from a top view of the three-dimensional scene map.
The first contour line can be obtained in various ways. For example, it may be drawn with regular shapes, including but not limited to at least one of: a circle, a rectangle, or a triangle. However, for the irregular regions that may exist in a scene map, drawing the first contour line with regular shapes may not be accurate enough. Alternatively, a professional game artist can draw the first contour line, yielding a highly accurate region contour, but this approach depends heavily on the skill of the drawing personnel and consumes considerable time and effort.
To improve drawing accuracy, reduce the dependence on professional drawing personnel, and save drawing time and effort, the technical solution of this application provides a drawing scheme for the first contour line. Specifically, a line-segment drawing control can be used to draw point by point along the contour of a region in the scene map, forming a first contour line enclosed by multiple line segments; the control may draw straight and/or curved segments. With the line-segment drawing control as the drawing tool, the first contour line can be obtained by extracting the data the control generates in the scene map. Obtaining the first contour line this way effectively reduces the dependence on professional drawing personnel: even staff with little artistic skill can draw the first contour line of a region with the control, saving drawing time and effort and improving efficiency. In addition, the line-segment drawing control has a wider range of application and can outline irregularly shaped regions, so the drawn first contour line is more accurate than one produced with the regular-shape scheme.
Fig. 2 is a schematic view of a scene map provided in an embodiment of the present application. Executing S101 yields the first contour line shown in Fig. 3A. Fig. 3B shows the same first contour line of the region in the scene map of Fig. 2, displayed apart from the original scene map for clarity.
S102: and mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line.
In the technical scheme of the application, in order to mark the areas in the scene map, a bitmap corresponding to the scene map is generated, and different areas in the scene map are marked by different colors in the bitmap. Such a bitmap shows the different areas in a simple, intuitive way. The first contour line formed in S101 is in the scene map coordinate system, and in order to form a desired bitmap capable of marking different areas with different colors, the first contour line needs to be converted from the scene map coordinate system into the bitmap coordinate system. For the sake of convenience of distinction, the region contour line converted in the bitmap coordinate system is defined as the second contour line here.
The data composing the first contour line of each region includes the coordinates, in the scene map coordinate system, of a number of vertices on the first contour line, together with parameters such as the curvature or slope of the line segment connecting each pair of adjacent vertices. For ease of distinction, the coordinate of a vertex of the first contour line in the scene map coordinate system is called its first coordinate.
To convert the first contour line into the second contour line, a bitmap coordinate system corresponding to the scene map can first be established according to the size of the scene map and a preset division precision. For example, the number of horizontal pixels in the bitmap coordinate system is obtained from the horizontal size of the scene map and the preset division precision; the number of vertical pixels is obtained from the vertical size of the scene map and the preset division precision; and the bitmap coordinate system is established with one vertex of the scene map (such as its top-left vertex) as the origin and with those horizontal and vertical pixel counts. As an example, let the horizontal size of the scene map be SceneSizeX (unit: cm), its vertical size be SceneSizeY (unit: cm), and the preset division precision be P (unit: cm per pixel). The larger P is, the larger the range in the scene map covered by a single pixel in the bitmap coordinate system, and the fewer pixels the bitmap coordinate system has. Dividing SceneSizeX by P gives the number of horizontal pixels SizeX, and dividing SceneSizeY by P gives the number of vertical pixels SizeY, corresponding to the scene map in the bitmap coordinate system.
Through the above operations, a bitmap coordinate system corresponding to the scene map is established. Next, the first coordinates of the vertices of the first contour line in the scene map coordinate system are extracted. When the bitmap coordinate system was established, a conversion relation from the scene map coordinate system to the bitmap coordinate system was fixed by the size of the scene map and the preset division precision. Based on this relation, each first coordinate can be mapped from the scene map coordinate system into the bitmap coordinate system to obtain the vertex's coordinate there; for ease of distinction, the coordinate of a vertex mapped into the bitmap coordinate system is called its second coordinate.
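The two conversions above can be sketched in Python as follows. This is a minimal illustration of the described relation, assuming the origin is the top-left map vertex and that SceneSizeX, SceneSizeY, and P follow the notation in the text; the function names are hypothetical.

```python
def bitmap_dimensions(scene_size_x, scene_size_y, p):
    """Number of horizontal and vertical pixels in the bitmap coordinate
    system: the scene-map size (cm) divided by the precision P (cm/pixel)."""
    return int(scene_size_x // p), int(scene_size_y // p)

def scene_to_bitmap(first_coord, p):
    """Map a first coordinate (cm, relative to the top-left map vertex)
    to a second coordinate (pixel indices) in the bitmap coordinate system."""
    x_cm, y_cm = first_coord
    return int(x_cm // p), int(y_cm // p)
```

For example, a 10,000 cm by 8,000 cm scene map with P = 10 yields a 1000 by 800 bitmap, and the scene point (2505, 1001) maps to pixel (250, 100).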
The above operations determine the second coordinates of the vertices in the bitmap coordinate system. To form a second contour line in the bitmap coordinate system that matches the first contour line, the pixel coordinates of the line segments between adjacent vertices are determined by interpolation. Specifically, for a line segment whose two endpoints are adjacent vertices of the first contour line, the bitmap coordinates of the pixels between those endpoints can be obtained with the digital differential analyzer (DDA) algorithm, based on the second coordinates of the two vertices. DDA is a computer-graphics algorithm that rapidly interpolates a variable between a start point and an end point and can be used to rasterize lines, triangles, and polygons. Interpolating point coordinates with the DDA algorithm is a mature technique and is not described further here.
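For reference, a standard DDA line rasterization can be sketched as follows; this is the textbook form of the algorithm, not a fragment of the patented implementation.

```python
def dda_line(p0, p1):
    """Rasterize the segment between two adjacent contour vertices (given as
    second coordinates): step one pixel at a time along the longer axis and
    interpolate the other axis linearly, rounding to the nearest pixel."""
    x0, y0 = p0
    x1, y1 = p1
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]          # degenerate segment: a single pixel
    dx = (x1 - x0) / steps
    dy = (y1 - y0) / steps
    return [(round(x0 + i * dx), round(y0 + i * dy)) for i in range(steps + 1)]
```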
The coordinates of the two endpoints, together with the coordinates of the remaining pixels on the same line segment, form a coordinate set for that segment (the set may also take the form of a coordinate sequence). Because the first contour line consists of multiple line segments, the second contour line correspondingly also consists of multiple line segments; in this application, the second contour line can be obtained from the coordinate sets corresponding to the line segments of the first contour line. For example, the coordinate sets of the line segments belonging to the first contour line of the same region are collected into one total set, the vertex coordinates duplicated in the set are removed (or kept), and the second contour line in the bitmap coordinate system is finally formed from the coordinates in the total set.
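The collection step can be sketched as follows; using a Python set means the vertex pixels shared by adjacent segments are deduplicated automatically.

```python
def second_contour(segment_coordinate_sets):
    """Merge the per-segment pixel coordinate sets of one region's contour
    into a total set; duplicated shared vertices collapse automatically."""
    total = set()
    for segment in segment_coordinate_sets:
        total.update(segment)
    return total
```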
The second contour line is formed to facilitate the subsequent one-to-one coloring of the regions it divides. Fig. 4 is a schematic diagram of a second contour line obtained by mapping the first contour line shown in Fig. 3B according to an embodiment of the present application. As a comparison of Fig. 4 and Fig. 3B shows, the second contour line has the same, or nearly the same, shape as the first contour line. Note that the smaller the preset division precision P, the closer the shape of the second contour line is to that of the first, because the bitmap coordinate system then has more pixels and the interpolation between adjacent vertices is finer.
S103: and performing one-to-one region coloring in a bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
It can be understood that the second contour lines obtained in S102 demarcate several regions in the bitmap coordinate system, and these regions are colored with different colors. For example, if 4 regions are demarcated in total, they may be colored red, gray, green, and blue respectively. In the bitmap formed by this coloring, each region has a uniquely corresponding color, so the regions are marked by color in the bitmap.
In an example implementation of this step, the boundary line of the scene map's image in the bitmap coordinate system may be determined first; the regions demarcated by this image boundary line and the second contour lines in the bitmap coordinate system are then determined; and the different regions are then filled with different colors. After this coloring, the color of the second contour line itself does not belong to any region, and as the boundary between regions it is likely to interfere with identifying the region where a target is located. The second contour lines can therefore be identified in the bitmap coordinate system and removed, leaving a bitmap containing only region color values. Specifically, the pixels occupied by a second contour line can be filled with the color of either of the regions it separates: for example, if the region inside a second contour line is filled with a first color and the region outside with a second color, the contour pixels can be filled with either the first or the second color, preferably the color of the inner region. Fig. 5 is a schematic diagram of a bitmap after region coloring according to an embodiment of the present application.
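The patent describes the coloring outcome but not a specific fill algorithm; a 4-connected BFS flood fill is one common way to realize it, sketched below. The contour-removal step here fills each contour pixel with a neighboring region's color (rather than specifically the inner region's), which is an assumption of this sketch.

```python
from collections import deque

def color_regions(width, height, contour_pixels, colors):
    """One-to-one region coloring: flood-fill each region demarcated by the
    second contour lines (and the image boundary) with a unique color, then
    fill the contour pixels themselves with an adjacent region's color so
    that every pixel in the bitmap carries a region color."""
    bitmap = [[None] * width for _ in range(height)]
    palette = iter(colors)
    for sy in range(height):
        for sx in range(width):
            if bitmap[sy][sx] is None and (sx, sy) not in contour_pixels:
                color = next(palette)            # a new region gets a new color
                bitmap[sy][sx] = color
                queue = deque([(sx, sy)])
                while queue:                     # 4-connected BFS flood fill
                    x, y = queue.popleft()
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if (0 <= nx < width and 0 <= ny < height
                                and bitmap[ny][nx] is None
                                and (nx, ny) not in contour_pixels):
                            bitmap[ny][nx] = color
                            queue.append((nx, ny))
    for x, y in contour_pixels:                  # remove the contour line
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and bitmap[ny][nx] is not None:
                bitmap[y][x] = bitmap[ny][nx]    # take a neighboring region color
                break
    return bitmap
```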
In practical applications, a partition data table may be formed randomly or in a preset manner, and the region coloring then configured according to it. The partition data table contains a first mapping relationship between the color information of a region and the region's description information; according to this coloring scheme, the region with certain description information is filled with the color the table assigns to it. Alternatively, after the regions have been colored randomly or in a preset manner, the partition data table can be generated from the first mapping relationship between the color information of each colored region and its description information. Here, the description information of a region may include, but is not limited to, at least one of: the level, type, or name of the region, where the name may be, for example, a full or abbreviated name in Chinese or English. As an example, region types may be divided into dangerous and safe, and region levels into first, second, and third, with the first level carrying the highest risk and the third the lowest. Of course, this is only an example; in practical applications the description information of a region can take various contents and meanings depending on the game's settings, and is not limited here.
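A partition data table of this kind maps color information to description information, which a plain dictionary captures directly. All colors, names, and fields below are illustrative placeholders, not values from the patent.

```python
# Hypothetical partition data table: the first mapping relationship between
# a region's color information (RGB key) and its description information.
partition_table = {
    (255, 0, 0): {"name": "Main Town", "type": "safe",      "level": 3},
    (0, 255, 0): {"name": "Wild",      "type": "dangerous", "level": 1},
    (0, 0, 255): {"name": "Village",   "type": "safe",      "level": 2},
}

def describe_region(color):
    """Use the color information as the key to look up the value, i.e. the
    region's description information; returns None for an unknown color."""
    return partition_table.get(color)
```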
In the above embodiment, a region marking method is provided. The method first acquires a first contour line of a region in a scene map; it then maps the first contour line into a bitmap coordinate system corresponding to the scene map to obtain a second contour line; finally, it performs one-to-one region coloring in the bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map. Because the method marks each region in the bitmap with a unique color, it reduces the computational load and complexity of identifying the region where a target is located and lowers the physical-performance overhead of the computing device: the region where the target is located can be identified accurately simply by determining the color of the corresponding pixel in the bitmap corresponding to the scene map and using the one-to-one correspondence between colors and regions.
The region marking method of the foregoing embodiment produces a bitmap in which different regions are marked by colors. How the bitmap data is used to determine the region where a target is located is described below with reference to an embodiment.
Fig. 6A shows another region marking method according to an embodiment of the present application. As shown in Fig. 6A, the region marking method includes:
s601: a first contour line of an area in a scene map is obtained.
S602: and mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line.
S603: and performing one-to-one region coloring in a bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
S601-S603 are implemented in substantially the same way as S101-S103 in the foregoing embodiment; details are therefore not repeated here, and the reader is referred to the description and illustration above.
In order to facilitate the subsequent identification of the specific area of the target object in the scene map in combination with the bitmap, the following operations S604-S605 also need to be performed in this embodiment of the application.
S604: storing the data of the bitmap to a hard disk; the data of the bitmap includes color information of the pixels in the bitmap.
The color information of each pixel in the bitmap is stored on the hard disk for use when the scene map corresponding to the bitmap is loaded in the game.
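The patent does not prescribe a file format for the stored bitmap data; as a minimal sketch, Python's pickle serialization can persist the per-pixel color information and reload it at the scene map's loading stage.

```python
import pickle

def save_bitmap(path, bitmap):
    """Store the bitmap data (per-pixel color information) on the hard disk."""
    with open(path, "wb") as f:
        pickle.dump(bitmap, f)

def load_bitmap(path):
    """Load the bitmap data from the hard disk into memory (S606)."""
    with open(path, "rb") as f:
        return pickle.load(f)
```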
S605: and generating a partition data table according to a first mapping relation between the color information of the region and the description information of the region in the bitmap.
The first mapping relationship in the partition data table is used for lookup: once the color information at the target object's position is determined, that color information serves as the key, and the region's description information is retrieved as the value.
It should be noted that S605 may be executed before, or simultaneously with, S604. Once S604 and S605 have been completed, the loading stage and display stage of the scene map can be handled; during the display stage, there may be a need to identify the region where a target object is located. Typically, the loading stage of a scene map occurs when jumping from one scene map to another.
S606: and in the loading stage of the scene map, loading bitmap data from the hard disk and storing the bitmap data into the memory.
In S604, the bitmap data was stored on the hard disk, but the display stage of the scene map requires it in memory. The bitmap data is therefore loaded from the hard disk and stored in memory for use during the display stage.
S607: and in the display stage of the scene map, acquiring the position information of the target object in the scene map.
Taking a game as an example, the target object in the scene map may be a character controlled by the player; since the character moves with the player's control actions, there is a need to determine the region where the character is located. Depending on the game's settings, the controlled character may be a person, an animal, and so on; the target object is not limited here.
Here, the position information of the target object in the scene map can be expressed relative to the origin of the scene map coordinate system. For example, if the origin is the top-left vertex of the scene map, position information (a, b) means the target object is a cm to the right of the top-left vertex horizontally and b cm below it vertically.
S608: and determining a corresponding target pixel of the target object in the bitmap through coordinate conversion according to the position information.
In order to determine the region of the target object in the scene map in combination with the bitmap data, the position information of the target object also needs to be converted from the scene map coordinate system into the bitmap coordinate system, thereby determining the pixel corresponding to the target object in the bitmap, i.e., the target pixel. The coordinate conversion relationship may be determined by the size of the scene map and the preset division precision, which is not repeated here. It will be appreciated that the color information of the target pixel should match that of the region in which the target object is located.
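As an illustration of this conversion, the scene-map position can be divided by the preset division precision to obtain the bitmap pixel indices. The following is a minimal sketch, assuming a top-left origin and a precision expressed in centimeters of scene map per bitmap pixel; the function and parameter names are illustrative, not from the original.

```python
def scene_to_bitmap(pos_cm, precision_cm):
    """Map a scene-map position (x, y), in cm from the top-left origin,
    to the corresponding bitmap pixel (column, row).

    precision_cm is the preset division precision: how many centimeters
    of the scene map one bitmap pixel covers (names are illustrative).
    """
    x_cm, y_cm = pos_cm
    col = int(x_cm // precision_cm)
    row = int(y_cm // precision_cm)
    return col, row
```

For example, with a precision of 10 cm per pixel, a target object at (35, 12) on the scene map falls on bitmap pixel (3, 1).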
S609: and reading out the color information of the target pixel from the memory.
Since the bitmap data of the scene map, including the color information of each pixel in the bitmap, was stored in the memory in S606, the color information of the target pixel can be read directly from the memory according to the coordinates of the target pixel.
S610: and based on the color information of the target pixel, obtaining the description information of the region where the target object is located from the partition data table in an indexing way.
A partition data table was generated in S605, so once the color information of the target pixel is obtained, the color information of the region where the target object is located is known. Based on this color information, the description information of that region can be retrieved directly from the partition data table. That is, the region where the target object is located is identified.
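The index lookup described above amounts to a simple dictionary query keyed by region color. The sketch below assumes the partition data table is held as a color-to-description mapping; the colors and region names are hypothetical, not from the original.

```python
# Partition data table: the first mapping relationship, from a region's
# color information to that region's description information.
# Colors and region names below are illustrative only.
partition_table = {
    (255, 0, 0): "Central Plaza",
    (0, 255, 0): "Eastern Forest",
}

def describe_region(target_pixel_color):
    # Index the description information by the target pixel's color.
    return partition_table.get(target_pixel_color, "unknown region")
```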
S611: and displaying the description information of the region where the target object is located.
Next, in order to let a player who controls the target object, or a related player (e.g., a teammate or team leader), know the region where the target object is located, the description information of that region needs to be presented. There are multiple possible display modes. For example, the region description information may be presented in the basic information of the target object, or presented near the avatar or name of the target object. The specific display manner of the region description information is not limited herein.
The above embodiment accurately identifies the region where the target object is located and displays the region description information in the display stage of the scene map, based on the bitmap data of the scene map and the partition data mapping table formed in advance. This improves the convenience of a user (such as a game player) in operating the target object in the scene map and improves the user experience. Taking a game as an example, it also guarantees the feasibility of certain play modes, such as team battles. The core of the region identification process is obtaining the position information of the target object in the scene map, which is simple and easy to perform; region identification is then achieved based on the previously generated region marking result (i.e., the bitmap). Compared with establishing a region grid, constructing a collision box, and casting rays from the target object onto the grid through a physics engine, the required amount and complexity of computation are significantly reduced, thereby saving the physical performance cost of the computing device.
In practical applications, when the bitmap is stored in S604, it can be stored as a 24-bit color bitmap. A 24-bit color bitmap can accommodate filling a large number of regions with different colors without color compression, so no two regions in the bitmap end up with identical colors. The foregoing S606 mentions that, in the loading stage of the scene map, the bitmap data is loaded from the hard disk and stored in the memory. Since the bitmap is a 24-bit color bitmap, representing its color information requires a large amount of data: the color information of one pixel is represented by three 8-bit binary numbers (one per channel), so the 24-bit color bitmap can accurately represent 256 × 256 × 256 distinct colors. However, in practical applications, a scene map rarely contains more than 1000 regions, so the amount of pixel color data in a 24-bit color bitmap is far more than necessary. The large bitmap data volume, when stored in the memory, affects the performance of scene map display. Taking a game as an example, it affects the display efficiency and completeness of the scene map when the player experiences the game. It can be seen that memory performance is very important for the display and use of the scene map, and it is desirable to increase the amount of free space in the memory. Therefore, the present application provides another region marking method in which the bitmap data is compressed before being stored in the memory during the loading stage of the scene map. This is illustrated by the following embodiments.
Referring to fig. 6B, a further region marking method according to an embodiment of the present application is shown. As shown in fig. 6B, the method includes:
the implementation of S601'-S605' is identical to S601-S605, and is not repeated here.
S606': in the loading phase of the scene map, the color information of the pixels in the 24-bit color bitmap is loaded from the hard disk.
S607': the regions in the 24-bit color bitmap are assigned region identification codes.
In an alternative implementation, a decimal region identification code (region ID) may be assigned, e.g., 1, 2, 3, etc. In another alternative implementation, a binary region identification code may be assigned, such as 0001, 0010, 0011, etc.
In order to reduce the amount of data stored in the memory, the region identification codes may be allocated after determining their minimum bit width. For example: the total number M of regions contained in the 24-bit color bitmap may be determined first; then, according to the total number M, the minimum bit width N of the region identification code is determined through a base-2 logarithmic function. N is the ceiling of the base-2 logarithm of M, expressed as follows:
N = ⌈log₂M⌉
after the minimum bit width N is obtained, the region identification codes are generated with bit width N, so that no unnecessary bits are produced and the amount of data to be stored is saved. Finally, the region identification codes are allocated one-to-one to the regions in the 24-bit color bitmap.
For example, if there are 63 regions, i.e., M = 63, to which region identification codes need to be respectively assigned, then N = 6 is obtained according to the above expression. That is, for region identification codes represented in binary, a minimum bit width of 6 is sufficient to distinguish and represent the 63 different region identification codes.
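The bit-width calculation above can be sketched as follows; this is a minimal illustration of N = ⌈log₂M⌉, and the function name is ours, not from the original.

```python
import math

def min_bit_width(total_regions):
    # N = ceil(log2(M)): the smallest number of bits that can
    # distinguish M different region identification codes.
    # At least one bit is always needed.
    return max(1, math.ceil(math.log2(total_regions)))
```

With M = 63 this yields 6, matching the example in the text; note that M = 64 also fits in 6 bits, while M = 65 requires 7.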
S608': a second mapping of the region identification codes to the color information of the regions in the 24-bit color bitmap is constructed.
As mentioned above, region identification codes of small data volume are allocated to the regions, and each region already has corresponding color information in the bitmap, so a mapping relationship between the region identification codes and the color information, referred to as the second mapping relationship, can be established based on this association.
And S609': and compressing the color information of the pixels in the 24-bit color bitmap into corresponding area identification codes based on the second mapping relation and storing the area identification codes in the memory.
In the embodiment described in fig. 6A, the color information is stored directly in the memory. In this embodiment, the color information is compressed into region identification codes based on the second mapping relationship and then stored, which effectively reduces the amount of data that needs to be stored in the memory. In addition, the second mapping relationship itself may also be stored in the memory; its data volume is much smaller than that of the original complete color information.
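A minimal sketch of this compression step and the corresponding decompression lookup, assuming the bitmap's pixel colors are given as a flat list of RGB tuples (the representation and all names are illustrative, not from the original):

```python
def compress_bitmap(pixel_colors):
    """Compress 24-bit pixel colors into small region identification codes.

    Returns (ids, second_mapping), where second_mapping maps each
    region ID back to its original color, so that color information
    can be recovered later for indexing the partition data table.
    """
    color_to_id = {}
    ids = []
    for color in pixel_colors:
        if color not in color_to_id:
            color_to_id[color] = len(color_to_id)  # assign IDs one by one
        ids.append(color_to_id[color])
    second_mapping = {rid: c for c, rid in color_to_id.items()}
    return ids, second_mapping

def decompress_pixel(region_id, second_mapping):
    # Recover the target pixel's color from its region identification code.
    return second_mapping[region_id]
```

In memory, each stored value is then a small integer rather than a full 24-bit color triple, plus one copy of the second mapping.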
S610': and in the display stage of the scene map, acquiring the position information of the target object in the scene map.
S611': and determining a corresponding target pixel of the target object in the bitmap through coordinate conversion according to the position information.
S610 '-S611' is identical to the previous embodiments S607-S608, and is not described herein.
S612': and reading the area identification code corresponding to the target pixel from the memory.
The color information of the target pixel is the same as that of the region where the target object is located, and therefore, the region identification code of the target pixel is the same as that of the region.
S613': and obtaining the color information of the target pixel according to the area identification code corresponding to the target pixel and the second mapping relation.
Since the area identification code of the target pixel (i.e., the area identification code of the area where the target object is located) has already been obtained in S612', the color information of the target pixel (the color information of the area where the target object is located) can be directly obtained based on the second mapping relationship.
And S614': and based on the color information of the target pixel, obtaining the description information of the region where the target object is located from the partition data table in an indexing way.
And S615': and displaying the description information of the region where the target object is located.
S614 '-S615' is identical to the previous embodiments S610-S611 and will not be described herein.
In the embodiment of the present application, the color information of the pixels in the bitmap data is compressed into region identification codes of small data volume before being stored in the memory, so the memory occupation is effectively reduced. Thus, presentation of the scene map is facilitated. Taking a game as an example, this helps improve the player's game experience and reduces problems such as picture stuttering, incomplete display, and low display speed caused by poor memory performance in the game.
In the prior art, to realize region marking, a grid needs to be established for each region. The grid data volume is large, and the storage overhead at runtime is large. By storing bitmap data instead (and even compressing the color information of the pixels in the bitmap), the memory overhead is effectively reduced. In addition, in the prior art, if the scene map is modified, the grid data needs to be regenerated, so the modification workload is large and incremental modification is inconvenient. In the present method, if the scene map is modified, only the contour lines need to be redrawn and the modified regions re-colored on the basis of the previously generated bitmap, so the modification workload is smaller and more convenient. A further region marking method provided in the present application is described below with reference to fig. 7.
Fig. 7 is a flowchart illustrating a further area marking method according to an embodiment of the present application. Referring to fig. 7, the method includes:
the implementation manners of S701-S705 are the same as those of S601-S605 in the foregoing embodiment, and are not described herein again.
S706: and judging whether the scene map is modified or not, if so, entering S707, and if not, entering S711.
S707: the first contour line is updated based on the modified scene map.
S708: and mapping the updated first contour line into the bitmap to obtain an updated second contour line.
S709: and in the bitmap, performing one-to-one coloring on the change area based on the updated second contour line to obtain a new bitmap.
S710: and updating the color information of the pixels in the new bitmap into the hard disk, updating the partition data table according to the new bitmap, and entering S706.
S711: waiting for entering the loading phase of the scene map.
The method updates the first contour line based on the changed scene map and correspondingly updates the second contour line. On the basis of the initial bitmap, the changed regions are determined based on the difference between the updated second contour line and the previous second contour line, and the changed regions are colored one by one, so that the initial bitmap is modified simply and conveniently in step with changes to the scene map. The method supports multiple modification iterations and is highly flexible.
Based on the area marking method provided by the foregoing embodiment, correspondingly, the present application further provides an area marking device. The following description is made in conjunction with the embodiments and the accompanying drawings.
Referring to fig. 8, which is a schematic structural diagram of an area marking device 800 according to an embodiment of the present application, as shown in fig. 8, the area marking device 800 includes:
a first contour line obtaining unit 801, configured to obtain a first contour line of an area in a scene map;
a second contour line obtaining unit 802, configured to map the first contour line into a bitmap coordinate system corresponding to the scene map, so as to obtain a second contour line;
and a rendering unit 803, configured to perform one-to-one region rendering in a bitmap coordinate system based on the second contour line, so as to obtain a bitmap corresponding to the scene map.
Optionally, the second contour line obtaining unit 802 includes:
the coordinate system establishing subunit is used for establishing a bitmap coordinate system corresponding to the scene map according to the size of the scene map and the preset division precision;
the first coordinate acquisition subunit is used for extracting a first coordinate of a vertex in the first contour line in a scene map coordinate system;
the second coordinate acquisition subunit is used for mapping the first coordinate from the scene map coordinate system to the bitmap coordinate system to obtain a second coordinate of the vertex in the bitmap coordinate system;
the interpolation subunit is used for obtaining a coordinate set of a line segment which takes the adjacent vertex as two end points in the bitmap coordinate system by an interpolation method based on the second coordinate of the adjacent vertex in the first contour line; the coordinate set corresponds to a line segment which takes the adjacent vertexes as two end points in the first contour line;
and the second contour line obtaining subunit is used for obtaining a second contour line according to the coordinate set corresponding to each line segment in the first contour line.
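The interpolation subunit's role can be illustrated as follows: given the second coordinates of two adjacent vertices, it produces the coordinate set of the bitmap line segment between them. This sketch uses simple linear interpolation (a Bresenham-style walk would serve equally well); the function name is ours, not from the original.

```python
def interpolate_segment(p0, p1):
    """Coordinate set of the bitmap line segment whose two endpoints are
    the second coordinates p0 and p1 of adjacent vertices, obtained by
    linear interpolation between them."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [p0]  # coincident endpoints: a single pixel
    return [
        (round(x0 + (x1 - x0) * t / steps),
         round(y0 + (y1 - y0) * t / steps))
        for t in range(steps + 1)
    ]
```

Concatenating the coordinate sets of all segments of the first contour line then yields the second contour line as a set of bitmap pixels.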
The present application further provides another area-marking device, see fig. 8B, which is a schematic structural diagram of another area-marking device 800'. As shown in fig. 8B, the region marking apparatus 800' not only includes the first contour line obtaining unit 801, the second contour line obtaining unit 802 and the coloring unit 803 in the apparatus 800 shown in fig. 8A, but also includes:
a first storage unit 804 for storing data of the bitmap to the hard disk; the data of the bitmap includes color information of pixels in the bitmap;
a partition data table generating unit 805 configured to generate a partition data table according to a first mapping relationship between color information of a region and description information of the region in a bitmap;
a loading unit 806, configured to load bitmap data from the hard disk in a loading phase of the scene map;
a second storage unit 807, configured to store the bitmap data loaded by the loading unit 806 in the memory;
a position information obtaining unit 808, configured to obtain, at a stage of displaying the scene map, position information of the target object on the scene map;
a pixel determination unit 809 for determining a corresponding target pixel of the target object in the bitmap through coordinate conversion according to the position information;
a color information reading unit 810, configured to read color information of a target pixel from a memory;
a description information indexing unit 811 for indexing the description information of the region where the target object is located from the partition data table based on the color information of the target pixel;
the display unit 812 is configured to display description information of an area where the target object is located.
Optionally, the bitmap is a 24-bit color bitmap; a loading unit 806, specifically configured to load color information of pixels in the 24-bit color bitmap from the hard disk;
the second storage unit 807 includes:
the area identification code sub-unit is used for distributing area identification codes for areas in the 24-bit color bitmap;
the mapping relation construction subunit is used for constructing a second mapping relation between the area identification code and the color information of the area in the 24-bit color bitmap;
the compression storage subunit is used for compressing the color information of the pixels in the 24-bit color bitmap into corresponding area identification codes based on a second mapping relation and storing the corresponding area identification codes in the memory;
the color information reading unit 810 includes:
the area identification code reading subunit is used for reading the area identification code corresponding to the target pixel from the memory;
and the color information determining subunit is used for obtaining the color information of the target pixel according to the area identification code corresponding to the target pixel and the second mapping relation.
A region identifier subunit, specifically configured to determine a total number of regions included in the 24-bit color bitmap; determining the minimum bit width of the area identification code through a logarithmic function with the base 2 according to the total number; generating an area identification code according to the minimum bit width; the area identification codes are allocated to the areas in the 24-bit color bitmap one-to-one.
Optionally, the coloring unit 803, comprises:
the boundary line determining subunit is used for determining the corresponding image boundary line of the scene map in the bitmap coordinate system;
a region determining subunit configured to determine, in the bitmap coordinate system, a plurality of regions divided by the image boundary line and the second contour line;
a first coloring subunit, configured to fill different colors in different areas of the plurality of areas, respectively;
and the second coloring subunit is used for filling the second contour line into the color of any one of the areas divided by the second contour line.
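One way to realize the region-by-region filling performed by the coloring subunits is a flood fill bounded by the image boundary line and the second contour line. The following is a sketch under the assumption that the bitmap is a 2D grid where `None` marks uncolored cells and any other value marks a contour pixel or an already filled cell; all names are illustrative, not from the original.

```python
def flood_fill(grid, start, color):
    """Fill the connected uncolored region containing `start` with `color`.

    grid: 2D list (rows of cells); None marks an uncolored cell, any
    other value marks a boundary/contour pixel or a filled cell.
    start: (x, y) seed coordinate inside the region to fill.
    """
    height, width = len(grid), len(grid[0])
    stack = [start]
    while stack:
        x, y = stack.pop()
        if 0 <= x < width and 0 <= y < height and grid[y][x] is None:
            grid[y][x] = color
            # Visit the four neighbors; boundary cells stop the spread.
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return grid
```

Running this once per region with a distinct color, and then assigning the contour pixels the color of an adjacent region, yields the fully colored bitmap.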
Optionally, the first contour line obtaining unit 801 is specifically configured to obtain a first contour line drawn by the line segment drawing control for an area in the scene map.
Optionally, the area labeling apparatus further comprises: an update unit; the updating unit is used for updating the first contour line based on the modified scene map; mapping the updated first contour line into a bitmap to obtain an updated second contour line; and in the bitmap, performing one-to-one coloring on the change area based on the updated second contour line to obtain a new bitmap.
Optionally, the coordinate system establishing subunit is configured to obtain a number of horizontal pixels in the bitmap coordinate system according to the horizontal size of the scene map and a preset division precision; acquiring the number of longitudinal pixels in a bitmap coordinate system according to the longitudinal size and preset division precision of the scene map; and establishing a bitmap coordinate system by taking one vertex of the scene map as an origin and the number of horizontal pixels and the number of vertical pixels.
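As a sketch of what the coordinate system establishing subunit computes, the pixel counts follow from the scene map size and the preset division precision. Whether to round up or require exact divisibility is not specified in the original; this illustration rounds up, and the names are ours.

```python
import math

def bitmap_dimensions(map_width_cm, map_height_cm, precision_cm):
    # Horizontal / vertical pixel counts of the bitmap coordinate system:
    # scene map size divided by the preset division precision.
    cols = math.ceil(map_width_cm / precision_cm)
    rows = math.ceil(map_height_cm / precision_cm)
    return cols, rows
```

For instance, a 1000 cm × 500 cm scene map with a 10 cm precision yields a 100 × 50 bitmap coordinate system anchored at one vertex of the map.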
Based on the area marking method and device provided by the foregoing embodiments, correspondingly, the application further provides area marking equipment. The device may be implemented as a server or a terminal device.
Fig. 9 is a schematic diagram of a server structure for region indication according to an embodiment of the present application, where the server 900 may have a relatively large difference due to different configurations or performances, and may include one or more Central Processing Units (CPUs) 922 (e.g., one or more processors) and a memory 932, and one or more storage media 930 (e.g., one or more mass storage devices) storing an application 942 or data 944. Memory 932 and storage media 930 can be, among other things, transient storage or persistent storage. The program stored on the storage medium 930 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, a central processor 922 may be provided in communication with the storage medium 930 to execute a series of instruction operations in the storage medium 930 on the server 900.
The server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input-output interfaces 958, and/or one or more operating systems 941, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 9.
The CPU 922 is configured to execute the following steps:
acquiring a first contour line of an area in a scene map;
mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line;
and performing one-to-one region coloring in a bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
Another region marking device is provided in the embodiment of the present application, as shown in fig. 10. For convenience of description, only the portion related to the embodiment of the present application is shown; for undisclosed technical details, please refer to the method portion of the embodiments of the present application. The terminal may be any terminal device, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, etc. The following takes a mobile phone as an example:
fig. 10 shows a schematic structural diagram of a terminal device for region indication provided in an embodiment of the present application. If the terminal device is a mobile phone, referring to fig. 10, the mobile phone includes: radio Frequency (RF) circuit 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuit 1060, wireless fidelity (WiFi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 10:
RF circuit 1010 may be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, it passes the information to processor 1080 for processing; in addition, it transmits uplink data to the base station. In general, RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1010 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and so on.
The memory 1020 can be used for storing software programs and modules, and the processor 1080 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1020 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1030 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near the touch panel 1031 using any suitable object or accessory such as a finger, a stylus, etc.) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1031 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1031 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1030 may include other input devices 1032 in addition to the touch panel 1031. In particular, other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, a joystick, or the like.
The display unit 1040 may be used to display information input by a user or information provided to the user and various menus of the cellular phone. The Display unit 1040 may include a Display panel 1041, and optionally, the Display panel 1041 may be configured by using a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1031 can cover the display panel 1041, and when the touch panel 1031 detects a touch operation on or near the touch panel 1031, the touch operation is transmitted to the processor 1080 to determine the type of the touch event, and then the processor 1080 provides a corresponding visual output on the display panel 1041 according to the type of the touch event. Although in fig. 10, the touch panel 1031 and the display panel 1041 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1050, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1060, speaker 1061, and microphone 1062 may provide an audio interface between the user and the mobile phone. The audio circuit 1060 can transmit the electrical signal converted from received audio data to the speaker 1061, where it is converted into a sound signal and output; conversely, the microphone 1062 converts a collected sound signal into an electrical signal, which is received by the audio circuit 1060 and converted into audio data. The audio data is then processed by the processor 1080 and sent, for example, to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help the user to send and receive e-mail, browse web pages, access streaming media, etc. through the WiFi module 1070, which provides wireless broadband internet access for the user. Although fig. 10 shows the WiFi module 1070, it is understood that it does not belong to the essential constitution of the handset, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 1080 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1020 and calling data stored in the memory 1020, thereby integrally monitoring the mobile phone. Optionally, processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor, which handles primarily the operating system, user interfaces, applications, etc., and a modem processor, which handles primarily the wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1080.
The handset also includes a power source 1090 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 1080 via a power management system to manage charging, discharging, and power consumption via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present application, the processor 1080 included in the terminal further has the following functions:
acquiring a first contour line of an area in a scene map;
mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line;
and coloring the regions one by one in a bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
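Taken together, these three steps rasterize a hand-drawn region outline into a colored lookup bitmap. The following Python sketch strings them together end to end, assuming a separately supplied region-coloring routine; all names here are illustrative and not prescribed by the patent:

```python
def label_regions(first_contour, map_size, precision, color_fn):
    """End-to-end sketch of the three claimed steps (illustrative only):
    the first contour line (step 1) is passed in as scene-map coordinates,
    mapped into the bitmap coordinate system (step 2), and the resulting
    regions are colored one by one by `color_fn` (step 3)."""
    width, height = map_size[0] // precision, map_size[1] // precision
    # Step 2: scene-map coordinates -> bitmap pixel coordinates,
    # one pixel per `precision` scene-map units.
    second_contour = [(int(x / precision), int(y / precision))
                      for x, y in first_contour]
    # Step 3: delegate region coloring (e.g. a flood fill) to color_fn.
    return color_fn(width, height, second_contour)
```

The coloring step is left as a callback here because the claims treat it as a separate sub-procedure (see claim 6).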
The embodiment of the present application further provides a computer-readable storage medium for storing program code, where the program code is configured to execute any implementation of the region labeling method described in the foregoing embodiments.
The embodiment of the present application further provides a computer program product including instructions which, when run on a computer, cause the computer to perform any implementation of the region labeling method described in the foregoing embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A method for region labeling, comprising:
acquiring a first contour line of an area in a scene map;
mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line;
and coloring the regions one by one in the bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
2. The method of claim 1, wherein mapping the first contour line into a bitmap coordinate system corresponding to the scene map to obtain a second contour line comprises:
establishing a bitmap coordinate system corresponding to the scene map according to the size of the scene map and preset division precision;
extracting a first coordinate of a vertex in the first contour line in a scene map coordinate system;
mapping the first coordinate from the scene map coordinate system to the bitmap coordinate system to obtain a second coordinate of the vertex in the bitmap coordinate system;
obtaining, by interpolation based on the second coordinates of adjacent vertices in the first contour line, a coordinate set of a line segment having the adjacent vertices as its two end points in the bitmap coordinate system, wherein the coordinate set corresponds to the line segment having the adjacent vertices as its two end points in the first contour line;
and obtaining the second contour line according to the coordinate set corresponding to each line segment in the first contour line.
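One way to read the mapping and interpolation of claim 2 is the following Python sketch, which scales each vertex into pixel coordinates and then samples each edge densely enough that adjacent samples differ by at most one pixel. All names are illustrative; the patent does not prescribe an implementation:

```python
def map_contour_to_bitmap(vertices, precision):
    """Map scene-map contour vertices into bitmap pixel coordinates and
    rasterize each edge by linear interpolation (claim 2, illustrative)."""
    # One bitmap pixel covers `precision` scene-map units along each axis.
    def to_pixel(p):
        return (int(p[0] / precision), int(p[1] / precision))

    pixel_vertices = [to_pixel(v) for v in vertices]

    contour = []
    # Pair each vertex with the next, wrapping around to close the polygon.
    for (x0, y0), (x1, y1) in zip(pixel_vertices,
                                  pixel_vertices[1:] + pixel_vertices[:1]):
        # Enough samples that consecutive points differ by at most one pixel.
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps):
            t = i / steps
            contour.append((round(x0 + t * (x1 - x0)),
                            round(y0 + t * (y1 - y0))))
    return contour
```

Each edge omits its end vertex because the next edge starts with it, so every vertex appears exactly once in the second contour line.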
3. The method of claim 1 or 2, further comprising:
storing the data of the bitmap to a hard disk; the data of the bitmap comprises color information of pixels in the bitmap;
generating a partition data table according to a first mapping relation between the color information of the region and the description information of the region in the bitmap;
in the loading stage of the scene map, loading the data of the bitmap from the hard disk and storing the data of the bitmap into a memory;
in the display stage of the scene map, acquiring the position information of a target object in the scene map;
determining a corresponding target pixel of the target object in the bitmap through coordinate conversion according to the position information;
reading out the color information of the target pixel from the memory;
obtaining, by index lookup based on the color information of the target pixel, the description information of the region where the target object is located from the partition data table;
and displaying the description information of the region where the target object is located.
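The display-stage lookup of claim 3 amounts to: convert the object's position into a bitmap pixel, read that pixel's color, then index the partition data table with the color. A minimal Python sketch, with all names and data shapes assumed for illustration:

```python
def build_partition_table(region_colors, region_descriptions):
    """First mapping relation of claim 3: region color -> region
    description (illustrative)."""
    return dict(zip(region_colors, region_descriptions))

def describe_position(bitmap, partition_table, pos, precision):
    """Convert a scene-map position into a bitmap pixel, read its color
    from the in-memory bitmap, and index the description of the region
    containing the target object (claim 3, illustrative)."""
    # Coordinate conversion: scene-map units -> bitmap pixel indices.
    px, py = int(pos[0] / precision), int(pos[1] / precision)
    color = bitmap[py][px]  # bitmap stored row-major as [y][x]
    return partition_table[color]
```

Because the lookup is a pixel read plus a dictionary access, it runs in constant time regardless of how many regions the scene map contains, which is the practical benefit of precomputing the bitmap.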
4. The method of claim 3, wherein the bitmap is a 24-bit color bitmap; the loading the data of the bitmap from the hard disk and storing the data of the bitmap into a memory comprises:
loading color information of pixels in the 24-bit color bitmap from the hard disk;
allocating area identification codes to the areas in the 24-bit color bitmap;
constructing a second mapping relation between the area identification code and the color information of the area in the 24-bit color bitmap;
compressing the color information of the pixels in the 24-bit color bitmap into corresponding area identification codes based on the second mapping relation and storing the corresponding area identification codes in a memory;
the reading out the color information of the target pixel from the memory includes:
reading the area identification code corresponding to the target pixel from the memory;
and obtaining the color information of the target pixel according to the area identification code corresponding to the target pixel and the second mapping relation.
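Claim 4's compression replaces each pixel's 24-bit color with a small region identification code and keeps a second mapping from code back to color. A Python sketch of both directions, with all names illustrative:

```python
def compress_bitmap(pixels):
    """Replace each pixel's 24-bit color with a compact region
    identification code before keeping the bitmap in memory
    (claim 4, illustrative). Returns the compressed rows plus the
    second mapping relation, id -> color."""
    color_to_id, id_to_color = {}, {}
    compressed = []
    for row in pixels:
        out = []
        for color in row:
            if color not in color_to_id:
                # Allocate a fresh identification code per distinct region color.
                code = len(color_to_id)
                color_to_id[color] = code
                id_to_color[code] = color
            out.append(color_to_id[color])
        compressed.append(out)
    return compressed, id_to_color

def read_pixel_color(compressed, id_to_color, x, y):
    """Recover a pixel's color from its region identification code
    via the second mapping relation."""
    return id_to_color[compressed[y][x]]
```

With, say, at most 256 regions, each pixel shrinks from 24 bits to 8, a threefold memory saving while the color-keyed partition data table keeps working unchanged.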
5. The method of claim 4, wherein said assigning a region identification code to a region in said 24-bit color bitmap comprises:
determining a total number of regions contained in the 24-bit color bitmap;
determining the minimum bit width of the area identification code from the total number by using a base-2 logarithmic function;
generating an area identification code according to the minimum bit width;
and assigning the area identification codes to the areas in the 24-bit color bitmap one to one.
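The bit-width computation in claim 5 is a one-liner: the smallest width `w` with `2**w >= total` is the ceiling of the base-2 logarithm of the total region count. An illustrative Python version:

```python
import math

def min_bit_width(total_regions):
    """Smallest bit width whose codes can distinguish `total_regions`
    regions, via a base-2 logarithm (claim 5, illustrative)."""
    # max(..., 1) keeps at least one bit when there is a single region.
    return max(1, math.ceil(math.log2(total_regions)))
```

For example, 5 regions need 3 bits (8 possible codes), and 256 regions fit exactly in 8 bits.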
6. The method of claim 1 or 2, wherein the coloring the regions one by one in the bitmap coordinate system based on the second contour line comprises:
determining a corresponding image boundary line of the scene map in the bitmap coordinate system;
determining a plurality of areas divided by the image boundary line and the second contour line in the bitmap coordinate system;
filling different colors into different areas in the plurality of areas respectively;
and filling the second contour line into the color of any one of the areas divided by the second contour line.
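The region coloring of claim 6 can be realized with a standard flood fill: every connected area bounded by the image border and the second contour line receives a fresh color, and the contour pixels themselves then adopt the color of an adjacent region. A Python sketch under those assumptions (color values are plain integer indices here for brevity):

```python
from collections import deque

def color_regions(width, height, contour_pixels):
    """Flood-fill each region bounded by the image border and the second
    contour line with a distinct color, then fill contour pixels with the
    color of an adjacent region (claim 6, illustrative)."""
    UNFILLED, CONTOUR = -1, -2
    grid = [[UNFILLED] * width for _ in range(height)]
    for x, y in contour_pixels:
        grid[y][x] = CONTOUR

    next_color = 0
    for sy in range(height):
        for sx in range(width):
            if grid[sy][sx] != UNFILLED:
                continue
            # Breadth-first flood fill of one region with a fresh color.
            queue = deque([(sx, sy)])
            grid[sy][sx] = next_color
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] == UNFILLED:
                        grid[ny][nx] = next_color
                        queue.append((nx, ny))
            next_color += 1

    # Contour pixels take the color of any one adjacent region.
    for x, y in contour_pixels:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] >= 0:
                grid[y][x] = grid[ny][nx]
                break
    return grid
```

Assigning the contour to an adjacent region's color ensures every pixel of the bitmap resolves to some region during the later lookup, so objects standing exactly on a boundary still get a description.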
7. The method of claim 1 or 2, wherein the obtaining a first contour line of a region in a scene map comprises:
and acquiring a first contour line drawn for the region in the scene map by a line segment drawing control.
8. The method according to claim 1 or 2, wherein after the obtaining the bitmap corresponding to the scene map, the method further comprises:
updating the first contour line based on the modified scene map; mapping the updated first contour line into the bitmap to obtain an updated second contour line;
and coloring the changed regions one by one in the bitmap based on the updated second contour line to obtain a new bitmap.
9. The method according to claim 2, wherein the establishing of the bitmap coordinate system corresponding to the scene map according to the size of the scene map and the preset division precision comprises:
acquiring the number of horizontal pixels in the bitmap coordinate system according to the horizontal size of the scene map and a preset division precision; acquiring the number of vertical pixels in the bitmap coordinate system according to the vertical size of the scene map and the preset division precision;
and establishing the bitmap coordinate system by taking one vertex of the scene map as an origin and the number of the horizontal pixels and the number of the vertical pixels.
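The dimension computation of claim 9 divides each axis of the scene map by the division precision, rounding up so the bitmap covers the whole map. An illustrative Python version:

```python
import math

def bitmap_dimensions(map_width, map_height, precision):
    """Number of horizontal and vertical pixels in the bitmap coordinate
    system given the scene-map size and division precision (claim 9,
    illustrative). Each pixel covers precision x precision map units."""
    return (math.ceil(map_width / precision),
            math.ceil(map_height / precision))
```

The precision is the accuracy/memory trade-off knob: halving it quadruples the pixel count but locates region boundaries twice as finely.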
10. An area marking device, comprising:
the first contour line acquisition unit is used for acquiring a first contour line of an area in a scene map;
the second contour line obtaining unit is used for mapping the first contour line to a bitmap coordinate system corresponding to the scene map to obtain a second contour line;
and the coloring unit is configured to color the regions one by one in the bitmap coordinate system based on the second contour line to obtain a bitmap corresponding to the scene map.
11. The apparatus according to claim 10, wherein the second contour line obtaining unit includes:
a coordinate system establishing subunit, configured to establish a bitmap coordinate system corresponding to the scene map according to the size of the scene map and a preset division precision;
the first coordinate acquisition subunit is used for extracting a first coordinate of a vertex in the first contour line in a scene map coordinate system;
the second coordinate acquisition subunit is used for mapping the first coordinate from the scene map coordinate system to the bitmap coordinate system to obtain a second coordinate of the vertex in the bitmap coordinate system;
the interpolation subunit is configured to obtain, by interpolation based on the second coordinates of adjacent vertices in the first contour line, a coordinate set of a line segment having the adjacent vertices as its two end points in the bitmap coordinate system, wherein the coordinate set corresponds to the line segment having the adjacent vertices as its two end points in the first contour line;
and the second contour line obtaining subunit is used for obtaining the second contour line according to the coordinate set corresponding to each line segment in the first contour line.
12. The apparatus of claim 10 or 11, further comprising:
a first storage unit, configured to store data of the bitmap to a hard disk; the data of the bitmap comprises color information of pixels in the bitmap;
a partition data table generating unit, configured to generate a partition data table according to a first mapping relationship between color information of a region and description information of the region in the bitmap;
a loading unit, configured to load data of the bitmap from the hard disk in a loading stage of the scene map;
the second storage unit is used for storing the bitmap data loaded by the loading unit into a memory;
the position information acquisition unit is used for acquiring the position information of the target object in the scene map in the display stage of the scene map;
the pixel determining unit is used for determining a corresponding target pixel of the target object in the bitmap through coordinate conversion according to the position information;
a color information reading unit, configured to read out color information of the target pixel from the memory;
the description information indexing unit is used for indexing the description information of the region where the target object is located from the partition data table based on the color information of the target pixel;
and the display unit is used for displaying the description information of the area where the target object is located.
13. The apparatus of claim 12, wherein the bitmap is a 24-bit color bitmap; the loading unit is specifically configured to load color information of pixels in the 24-bit color bitmap from the hard disk;
the second storage unit includes:
an area identification code allocating subunit, configured to allocate area identification codes to the regions in the 24-bit color bitmap;
the mapping relation construction subunit is used for constructing a second mapping relation between the area identification code and the color information of the area in the 24-bit color bitmap;
the compression storage subunit is used for compressing the color information of the pixels in the 24-bit color bitmap into a corresponding area identification code based on the second mapping relation and storing the area identification code in an internal memory;
the color information reading unit includes:
the area identification code reading subunit is used for reading the area identification code corresponding to the target pixel from the memory;
and the color information determining subunit is used for obtaining the color information of the target pixel according to the area identification code corresponding to the target pixel and the second mapping relation.
14. An area labeling apparatus, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the region labeling method according to any one of claims 1 to 9 according to instructions in the program code.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store a program code for performing the region labeling method of any one of claims 1 to 9.
CN202110809005.9A 2021-07-16 2021-07-16 Region marking method, device, equipment and storage medium Active CN113457163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809005.9A CN113457163B (en) 2021-07-16 2021-07-16 Region marking method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110809005.9A CN113457163B (en) 2021-07-16 2021-07-16 Region marking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113457163A true CN113457163A (en) 2021-10-01
CN113457163B CN113457163B (en) 2023-09-15

Family

ID=77880794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809005.9A Active CN113457163B (en) 2021-07-16 2021-07-16 Region marking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113457163B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1185012A (en) * 1997-09-04 1999-03-30 Nissan Motor Co Ltd Method of plotting stereoscopic map and navigation system using it, and recording medium in which stereoscopic map plotting program is recorded
JP2006309802A (en) * 2006-08-17 2006-11-09 Sony Corp Image processor and image processing method
CN108022285A (en) * 2017-11-30 2018-05-11 杭州电魂网络科技股份有限公司 Map rendering intent and device
CN110956673A (en) * 2018-09-26 2020-04-03 北京高德云图科技有限公司 Map drawing method and device
CN111275730A (en) * 2020-01-13 2020-06-12 平安国际智慧城市科技股份有限公司 Method, device and equipment for determining map area and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1185012A (en) * 1997-09-04 1999-03-30 Nissan Motor Co Ltd Method of plotting stereoscopic map and navigation system using it, and recording medium in which stereoscopic map plotting program is recorded
JP2006309802A (en) * 2006-08-17 2006-11-09 Sony Corp Image processor and image processing method
CN108022285A (en) * 2017-11-30 2018-05-11 杭州电魂网络科技股份有限公司 Map rendering intent and device
CN110956673A (en) * 2018-09-26 2020-04-03 北京高德云图科技有限公司 Map drawing method and device
CN111275730A (en) * 2020-01-13 2020-06-12 平安国际智慧城市科技股份有限公司 Method, device and equipment for determining map area and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Luo Lin et al.: "Windows Game Programming", vol. 1, South China University of Technology Press, pages 89-93 *

Also Published As

Publication number Publication date
CN113457163B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111292405B (en) Image rendering method and related device
JP7024132B2 (en) Object display method, terminal device, and computer program
CN106547599B (en) Method and terminal for dynamically loading resources
CN110069580B (en) Road marking display method and device, electronic equipment and storage medium
US20160232707A1 (en) Image processing method and apparatus, and computer device
CN107193518B (en) Information display method and terminal equipment
US11260300B2 (en) Image processing method and apparatus
CN109584341B (en) Method and device for drawing on drawing board
WO2014173187A1 (en) Systems and methods for path finding in maps
CN109726368B (en) Map marking method and device
CN109885373B (en) Rendering method and device of user interface
CN108888954A (en) A kind of method, apparatus, equipment and storage medium picking up coordinate
CN113129417A (en) Image rendering method in panoramic application and terminal equipment
CN113457163B (en) Region marking method, device, equipment and storage medium
US20140324342A1 (en) Systems and Methods for Path Finding in Maps
CN114511438A (en) Method, device and equipment for controlling load
CN111367502A (en) Numerical value processing method and device
CN112308766B (en) Image data display method and device, electronic equipment and storage medium
CN115375594A (en) Image splicing method and device and related product
CN114064017A (en) Drawing method and related equipment
CN110196878A (en) A kind of data visualization stereo display method, display device and storage medium
CN116704107B (en) Image rendering method and related device
CN112044080B (en) Virtual object management method and related device
CN117152327B (en) Parameter adjusting method and related device
CN111612921A (en) Collision range determining method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053922

Country of ref document: HK

GR01 Patent grant