CN112363987A - File compression method and device, file loading method and device and electronic equipment - Google Patents

File compression method and device, file loading method and device and electronic equipment

Info

Publication number
CN112363987A
CN112363987A CN202011265395.XA
Authority
CN
China
Prior art keywords
file
binary
binary file
information
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011265395.XA
Other languages
Chinese (zh)
Inventor
满达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202011265395.XA
Publication of CN112363987A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/174 Redundancy elimination performed by the file system
    • G06F16/1744 Redundancy elimination performed by the file system using compression, e.g. sparse files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/182 Distributed file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/445 Program loading or initiating
    • G06F9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a file compression method and device, a file loading method and device, and an electronic device, wherein the file compression method comprises the following steps: acquiring a first type file and a second type file, wherein the first type file comprises a first binary file; acquiring a second binary file, wherein the second binary file is obtained by converting the second type file and comprises only binary content; and obfuscating the first binary file and the second binary file to obtain a compressed third binary file, wherein the arrangement of the first binary file and the second binary file can be characterized by file position information.

Description

File compression method and device, file loading method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a file compression method and apparatus, a file loading method and apparatus, and an electronic device.
Background
With the rapid development of internet technology and big data technology, some data processing can be performed on the edge device side. Model files and resource files therefore need to be loaded on the edge device side for data processing.
In implementing the disclosed concept, the inventors found at least the following problem in the related art: when model files and resource files are loaded on the basis of a mainstream computing framework, their information security and ease of transmission cannot meet user requirements.
Disclosure of Invention
In view of this, the present disclosure provides a file compression method and apparatus, a file loading method and apparatus, and an electronic device, which can improve information security and the freedom with which information can be distributed.
One aspect of the present disclosure provides a file compression method executed by a cloud end, where the cloud end is connected to a plurality of edge device ends, and the method may include: acquiring a first type file and a second type file, wherein the first type file comprises a first binary file; acquiring a second binary file, wherein the second binary file is obtained by converting the second type file and comprises only binary content; and obfuscating the first binary file and the second binary file to obtain a compressed third binary file, wherein the arrangement of the first binary file and the second binary file can be characterized by file position information.
According to an embodiment of the present disclosure, the method further includes: analyzing the third binary file to determine file position information; and writing the file position information into the third binary file.
According to an embodiment of the present disclosure, the file location information includes: at least one of file identification information, file start position information, file end position information, file index information and file length information.
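The file location fields listed above can be gathered into one record per packed file. The sketch below is a minimal illustration of such a record; all field names are assumptions chosen for readability, not identifiers from the patent, and the end position is simply derived from start plus length.

```python
from dataclasses import dataclass

@dataclass
class FileLocation:
    """One entry of the file position information (illustrative field names)."""
    file_id: str   # file identification information
    index: int     # file index information (position in the pack order)
    start: int     # file start position: byte offset in the third binary file
    length: int    # file length information, in bytes

    @property
    def end(self) -> int:
        # file end position information follows from start + length
        return self.start + self.length

loc = FileLocation(file_id="model_0", index=0, start=128, length=4096)
```

Storing any one of start/end/length pairs is enough to recover the rest, which is why the claim lists them as alternatives ("at least one of").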
According to an embodiment of the present disclosure, the first type file is a model file, the model file including model topology information and model parameters; obfuscating the first binary file and the second binary file includes: determining the model topology information and at least part of the model parameters of each of at least one model from at least one first binary file; writing the obfuscated model topology information and at least part of the model parameters of the at least one model into the third binary file; and sequentially writing the at least one first binary file and the at least one second binary file into the third binary file.
According to an embodiment of the present disclosure, sequentially writing the at least one first binary file and the at least one second binary file into the third binary file includes: determining an offset, wherein the offset is used for changing the file starting position of the first binary file and/or the second binary file in the third binary file; and sequentially writing the at least one first binary file and the at least one second binary file into a third binary file based on the offset.
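The sequential-writing-with-offset step can be sketched as follows. This is a hypothetical illustration of the idea, not the patent's concrete on-disk format: an `offset` of reserved bytes shifts every file's start position, and the start/length of each payload is recorded as it is appended.

```python
def pack_with_offset(binaries, offset=16):
    """Sequentially write (file_id, payload) pairs into one 'third binary file'.

    `offset` reserves a header region, so no payload starts at byte 0 and
    every file's start position is shifted accordingly.  Illustrative sketch.
    """
    out = bytearray(b"\x00" * offset)      # reserved region created by the offset
    locations = {}
    for file_id, data in binaries:
        start = len(out)                   # start position after the offset
        out.extend(data)
        locations[file_id] = (start, len(data))
    return bytes(out), locations

blob, locs = pack_with_offset([("model_0", b"\x01\x02"), ("res_0", b"abc")])
```

With the default offset of 16, the first payload lands at byte 16 and the second immediately after it, and `locations` plays the role of the file position information.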
According to the embodiment of the disclosure, the first obfuscation order of the model topology information and at least part of the model parameters of each of the at least one first binary file is the same as or different from the second obfuscation order of each of the at least one first binary file, and the model topology information and at least part of the model parameters of each of the at least one first binary file can be determined through the file position information.
According to an embodiment of the present disclosure, the method further includes: after the compressed third binary file is obtained, transmitting the compressed third binary file to at least one of the plurality of edge device terminals; and/or transmitting authorization information to at least one of the plurality of edge device terminals, wherein the authorization information is a character string enabling the at least one of the plurality of edge device terminals to parse the file location information.
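The authorization information is described only as "a character string" that lets a device parse the file location information. One way to realize such a string, shown purely as an assumption (the patent does not specify HMAC, a shared secret, or a JSON trailer), is a keyed digest checked before the location data is decoded:

```python
import hashlib
import hmac
import json

SECRET = b"cloud-shared-secret"  # hypothetical key shared by cloud and device

def make_auth_string(device_id: str) -> str:
    # cloud side: derive the authorization character string for one edge device
    return hmac.new(SECRET, device_id.encode(), hashlib.sha256).hexdigest()

def parse_location_info(trailer: bytes, device_id: str, auth: str) -> dict:
    # edge side: the file position information stays opaque unless the
    # presented authorization string checks out
    if not hmac.compare_digest(auth, make_auth_string(device_id)):
        raise PermissionError("invalid authorization string")
    return json.loads(trailer)

token = make_auth_string("edge-01")
info = parse_location_info(b'{"model_0": [16, 2]}', "edge-01", token)
```

A device without the correct string cannot recover the arrangement of the packed files, which is the gating effect the claim describes.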
Another aspect of the present disclosure provides a file loading method performed by an edge device end, including: acquiring a third binary file, wherein the third binary file comprises an obfuscated first binary file and second binary file, and the arrangement of the first binary file and the second binary file can be characterized by file position information; and loading the file from the head of the third binary file to the tail of the third binary file based on the file position information in a memory-mapped-file manner.
According to the embodiment of the present disclosure, loading a file from the head of the third binary file to the tail of the third binary file based on the file location information in a memory mapped file manner includes: mapping the third binary file to a designated storage space from the head of the third binary file to the tail of the third binary file; acquiring file position information; and loading the required first type file and/or second type file from the designated storage space based on the file position information.
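The three loading steps above (map head-to-tail into a designated storage space, obtain the location info, slice out the required file) can be sketched with a memory-mapped file. The helper and file layout below are illustrative assumptions; in the described scheme the location table would be parsed out of the packed file itself rather than passed in.

```python
import mmap
import os
import tempfile

def load_from_pack(path, locations, wanted):
    """Map the third binary file head-to-tail, then load one required
    sub-file by its recorded (start, length).  Illustrative sketch."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            start, length = locations[wanted]
            return mm[start:start + length]   # slicing copies the bytes out

# demo: write a small packed file (16-byte reserved header + two payloads),
# then load just the second payload through the mapping
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 16 + b"\x01\x02" + b"abc")
os.close(fd)
data = load_from_pack(path, {"res_0": (18, 3)}, "res_0")
os.remove(path)
```

Because the mapping is lazy, only the pages that back the requested slice need to be faulted in, which is what makes this style of loading fast on edge devices.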
According to the embodiment of the disclosure, loading the required first type file and/or second type file from the designated storage space based on the file position information comprises: determining a required first binary file and/or a required second binary file from a specified storage space based on the file position information; determining model topology information and at least part of model parameters corresponding to the required first binary file based on the file position information; and determining a required first type file based on the required first binary file and the model topology information and at least part of the model parameters corresponding to the required first binary file, and/or determining a required second type file based on the required second binary file.
According to an embodiment of the present disclosure, acquiring the file location information includes: acquiring authorization information, wherein the authorization information is a character string which enables at least one of a plurality of edge equipment terminals to analyze file position information; and determining file location information from the specified storage space based on the authorization information.
Another aspect of the present disclosure provides a file compression apparatus disposed in a cloud end, the cloud end being connected to at least one edge device end, and the apparatus includes: a file-to-be-compressed acquisition module, a second binary file acquisition module, and a file obfuscation module. The file-to-be-compressed acquisition module is used for acquiring a first type file and a second type file, wherein the first type file comprises a first binary file; the second binary file acquisition module is used for acquiring a second binary file, wherein the second binary file is obtained by converting the second type file and comprises only binary content; and the file obfuscation module is used for obfuscating the first binary file and the second binary file to obtain a compressed third binary file, wherein the arrangement of the first binary file and the second binary file can be characterized by file position information.
According to the embodiment of the disclosure, the device further comprises a file position information analyzing module and a file position information writing module, wherein the file position information analyzing module is used for analyzing the third binary file to determine the file position information; and the file position information writing module is used for writing the file position information into the third binary file.
According to an embodiment of the present disclosure, the file location information includes: at least one of file index information, file length information, file identification information, file start position information, and file end position information.
According to an embodiment of the present disclosure, the first type file is a model file, the model file including model topology information and model parameters; the file obfuscation module includes: a model topology information determining submodule, a model topology information writing submodule, and a file writing submodule. The model topology information determining submodule is used for determining the model topology information and at least part of the model parameters of each of at least one model from at least one first binary file; the model topology information writing submodule is used for writing the obfuscated model topology information and at least part of the model parameters of the at least one model into the third binary file; and the file writing submodule is used for sequentially writing the at least one first binary file and the at least one second binary file into the third binary file.
According to an embodiment of the present disclosure, the file writing submodule includes: an offset determination unit and a file writing unit. The offset determining unit is used for determining an offset, and the offset is used for changing the file starting position of the first binary file and/or the second binary file in the third binary file; and the file writing unit is used for writing the at least one first binary file and/or the at least one second binary file into the third binary file one by one based on the offset.
According to the embodiment of the disclosure, the first obfuscation order of the model topology information and at least part of the model parameters of each of the at least one first binary file is the same as or different from the second obfuscation order of each of the at least one first binary file, and the model topology information and at least part of the model parameters of each of the at least one first binary file can be determined through the file position information.
According to the embodiment of the disclosure, the cloud end is connected with at least one edge device end, and the device further comprises at least one of the following modules: the file transmission module is used for transmitting the compressed third binary file to at least one of the edge device terminals; and the authorization information transmission module is used for transmitting the authorization information to at least one of the plurality of edge device terminals, wherein the authorization information is a character string which enables the at least one of the plurality of edge device terminals to analyze the file position information.
Another aspect of the present disclosure provides a file loading apparatus disposed at an edge device end, the apparatus including: a third binary file acquisition module and a file loading module. The third binary file acquisition module is used for acquiring a third binary file, wherein the third binary file comprises an obfuscated first binary file and second binary file, and the arrangement of the first binary file and the second binary file can be characterized by file position information; and the file loading module is used for loading the file from the head of the third binary file to the tail of the third binary file based on the file position information in a memory-mapped-file manner.
According to an embodiment of the present disclosure, a file loading module includes: the device comprises a memory mapping submodule, a file position information acquisition submodule and a loading submodule. The memory mapping submodule is used for mapping the third binary file to the designated storage space from the head of the third binary file to the tail of the third binary file; the file position information acquisition submodule is used for acquiring file position information; and the loading submodule is used for loading the required first type file and/or second type file from the specified storage space based on the file position information.
According to an embodiment of the present disclosure, the loading submodule includes a binary file determining unit configured to determine a required first binary file and/or second binary file from a specified storage space based on file location information; a model topology determining unit, configured to determine, based on the file location information, model topology information and at least part of the model parameters corresponding to the required first binary file; and a file determining unit for determining the required first type file based on the required first binary file and the model topology information and at least part of the model parameters corresponding to the required first binary file, and/or determining the required second type file based on the required second binary file.
According to the embodiment of the present disclosure, the file location information obtaining sub-module includes: an authorization information unit and a file location information determination unit. The authorization information unit is used for acquiring authorization information, wherein the authorization information is a character string which enables at least one of the edge equipment terminals to analyze the file position information; and a file location information determining unit for determining file location information from the specified storage space based on the authorization information.
Another aspect of the present disclosure provides a computer system comprising one or more processors and a storage device, wherein the storage device is configured to store executable instructions that, when executed by the processors, implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, the resource files and the model files are obfuscated and compressed in the form of binary files, so that they cannot easily be snooped on or cracked, effectively improving information security.
According to the embodiments of the present disclosure, the resource files and the model files are laid out flat, and memory mapping is used, so that fast loading is conveniently realized.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates a file compression method and apparatus, a file loading method and apparatus, and an application scenario of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a system architecture of an electronic device, and a method and an apparatus for file compression, a method and an apparatus for file loading according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a file compression method according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a data flow diagram of a file compression method according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram of a file compression method according to another embodiment of the present disclosure;
FIG. 6 schematically shows a flow diagram of a file loading method according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a data flow diagram of a file loading method according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a file compression apparatus according to an embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a file loading apparatus according to another embodiment of the present disclosure; and
FIG. 10 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
The embodiment of the disclosure provides a file compression method executed by a cloud end. The method comprises a binary file acquisition process and a file obfuscation process. In the binary file acquisition process, a first type file and a second type file are first obtained, the first type file comprising a first binary file; a second binary file is then obtained, the second binary file being obtained by converting the second type file and comprising only binary content. After the binary file acquisition process is finished, the file obfuscation process is entered: the first binary file and the second binary file are obfuscated to obtain a compressed third binary file, wherein the arrangement of the first binary file and the second binary file can be characterized by file position information.
The embodiment of the disclosure also provides a file loading method executed by the edge device end. The method comprises a binary file acquisition process and a file loading process. In the binary file acquisition process, a third binary file is obtained, wherein the third binary file comprises an obfuscated first binary file and second binary file, and the arrangement of the first binary file and the second binary file can be characterized by file position information. After the binary file acquisition process is finished, the file loading process is entered, and the file is loaded from the head of the third binary file to the tail of the third binary file based on the file position information in a memory-mapped-file manner.
In the following, some concepts related to the present disclosure are first exemplified to better understand embodiments of the present disclosure.
Edge computing refers to an open platform that integrates network, computing, storage, and core application capabilities on the side close to the object or data source, providing nearest-end services nearby. Applications are initiated on the edge side, producing faster network service responses and meeting the industry's fundamental requirements in real-time business, application intelligence, security, and privacy protection. Edge computing sits between the physical entities and the industrial connection, or on top of the physical entities, while cloud computing can still access the historical data of edge computing.
For scenarios such as the internet of things, much of the control can be realized through local internet-of-things nodes without being handed to the cloud. The processing can be completed in the local edge computing layer, which reduces the load on the cloud and, being close to the user, provides faster responses.
Models such as deep neural networks are now applied to various edge computing devices. During the operation and deployment of an edge computing device, the relevant resource files and model files need to be loaded.
When model files and resource files are loaded on the basis of a mainstream computing framework, the model file format follows the output format of the popular framework, and the resource file format is plain to see. This is unfriendly to confidentiality and to freely distributable deployment, and loading such files on edge computing devices is also time consuming. As a result, the information security of the resource files and model files cannot be guaranteed; for example, the content format of the sub-files and the fine details of the model file can be snooped with popular parsing tools. Loading resource files and model files is time consuming and is not conducive to deployment.
Fig. 1 schematically shows a file compression method and apparatus, a file loading method and apparatus, and an application scenario of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 1, a scenario including two types of files is taken as an example. A plurality of first type files and second type files are respectively converted into binary files (e.g., binary files 1 to n of the first type files and binary files 1 to n of the second type files) to implement compression. After being obfuscated, the plurality of binary files are written into a new binary file, and the arrangement order of each type of file in the new binary file can be determined through analysis. The new binary file mixes the key information of the plurality of files (such as the layer structure information and parameters of a model) with the binary content of the plurality of different types of files, so that the contents of the various types of files cannot easily be read from the new binary file, thereby realizing information encryption. When file loading is needed, the new binary file can be restored to the initial first type files and second type files, conveniently completing the file loading process. Here n may be a positive integer greater than 1. It should be noted that the two types of files are shown only by way of example and are not to be construed as limiting the disclosure; the compression and loading processes may also be performed for more types (e.g., 3 or more) or fewer types (e.g., 1) of files.
Fig. 2 schematically shows a schematic diagram of a system architecture suitable for a file compression method and apparatus, a file loading method and apparatus, and an electronic device according to an embodiment of the present disclosure. It should be noted that fig. 2 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, the system architecture 200 according to this embodiment may include terminal devices 201, 202, 203, a network 204, a server 205. The network 204 serves as a medium for providing communication links between the terminal devices 201, 202, 203 and the server 205. Network 204 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 201, 202, 203 to interact with the server 205 via the network 204 to receive or send messages or the like. The terminal devices 201, 202, 203 may have various communication client applications installed thereon, such as a shopping application, a web browser application, an operation and maintenance application, a search application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only).
The terminal devices 201, 202, 203 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablets, laptop portable computers, desktop computers, industrial computers, terminal servers, and the like.
The server 205 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 201, 202, 203. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the file compression method provided by the embodiment of the present disclosure may be generally executed by the server 205. Accordingly, the file compression apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 205. The file loading method provided by the embodiment of the present disclosure may be generally executed by the terminal devices 201, 202, and 203. Accordingly, the file loading apparatus provided by the embodiments of the present disclosure may be generally disposed in the terminal devices 201, 202, and 203.
It should be understood that the number of terminal devices, networks, and servers are merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 3 schematically shows a flow chart of a file compression method according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S301 to S305.
In operation S301, a first type file and a second type file are obtained, the first type file including a first binary file.
In this embodiment, the first type file and the second type file may be files of different types. Specifically, the first type file may be a binary file, and the second type file may be a file including at least a non-binary file. For example, the first type file may be a model file, including but not limited to files of various models such as neural networks. The second type file may be a resource file including, but not limited to, text, pictures, and the like. The neural network may include a network topology and network parameters; the network topology may include layer information and the like, and the network parameters may include weights, biases, and the like.
The model file may be stored as a binary file, which may be obtained by formatting and serializing the model's data structures.
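As a concrete illustration of such serialization, the sketch below packs a toy model (layer names plus float weights) into a flat binary blob. The layout used here (little-endian counts, length-prefixed names) is a hypothetical example for exposition only, not the format used by the disclosure:

```python
import struct

def serialize_model(layer_types, weights):
    """Pack a toy model (layer names + float weights) into bytes.

    Hypothetical layout: a 4-byte layer count, then each layer name
    length-prefixed, then a 4-byte weight count and raw float64 weights.
    """
    buf = bytearray()
    buf += struct.pack("<I", len(layer_types))
    for name in layer_types:
        raw = name.encode("utf-8")
        buf += struct.pack("<I", len(raw)) + raw
    buf += struct.pack("<I", len(weights))
    buf += struct.pack(f"<{len(weights)}d", *weights)
    return bytes(buf)

# A three-layer toy topology with two key parameters.
blob = serialize_model(["input", "dense", "output"], [0.5, -1.25])
```

Any serialization that yields a self-describing byte stream would serve equally well here; the point is only that a model file can be flattened into a first binary file before obfuscation.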
In operation S303, a second binary file is obtained, where the second binary file is obtained by converting a second type file, and the second binary file only includes binary files.
In this embodiment, the second type file including a non-binary file (or the non-binary file therein) may be converted into a binary file, so that the first type file and the second type file can be obfuscated together for protection. It should be noted that, if the second type file itself only includes binary files, the conversion is not needed.
In operation S305, the first binary file and the second binary file are obfuscated to obtain a compressed third binary file, wherein the arrangement of the first binary file and the second binary file may be characterized by file location information.
The first binary file and the second binary file may be randomly mixed, or the first binary file and the second binary file may be mixed according to a certain rule. If the first binary file and the second binary file are mixed according to a certain rule, the file position information is known.
Taking the example where the first type file is a model file, the model file including model topology information and model parameters, obfuscating the first binary file and the second binary file may include the following operations.
First, model topology information and at least part of the model parameters of at least one model are determined from at least one first binary file. The model topology information may be, for example: the first layer is an input layer, the second layer is a fully-connected layer, the third layer is a convolutional layer, the fourth layer is a pooling layer, the fifth layer is a convolutional layer, the sixth layer is a pooling layer, the seventh layer is a fully-connected layer, the eighth layer is an output layer, and so on. At least part of the model parameters may be key parameters of the model, and the like. For example, a model parameter may be a weight or bias of a layer, or a weight or bias of a node. The model may be any of various artificial intelligence models, not only neural networks but also random forests, linear regression, decision trees, support vector machines, and the like.
Then, the obfuscated model topology information and at least part of the model parameters of the at least one model are written into the third binary file. Specifically, they may be written into a designated location of the third binary file, such as the head or the tail, so as to facilitate reading of this key information.
And then, sequentially writing the at least one first binary file and the at least one second binary file into a third binary file. The writing sequence is not limited, for example, a first binary file is written first, then a second binary file is written, and then the steps are repeated until all the first binary file and the second binary file are written into the third binary file. In addition, two first binary files may be written in succession, and then two second binary files may be written. The first binary files in the plurality of first binary files may be randomly selected or selected according to a preset sequence.
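The alternating write described above can be sketched as follows. The `obfuscate_files` helper and its (kind, start, length) records are illustrative assumptions, not the disclosure's actual layout:

```python
def obfuscate_files(first_files, second_files):
    """Interleave two lists of binary blobs into one buffer,
    recording (kind, start, length) for each written blob."""
    out = bytearray()
    locations = []
    # Alternate: one first-type blob, then one second-type blob,
    # repeating until all blobs have been written.
    for a, b in zip(first_files, second_files):
        for kind, blob in (("first", a), ("second", b)):
            locations.append((kind, len(out), len(blob)))
            out += blob
    return bytes(out), locations

third, locs = obfuscate_files([b"MODEL-A", b"MODEL-B"], [b"res1", b"res2"])
```

The recorded positions play the role of the file location information: without them, the boundaries between interleaved blobs are not apparent from the third binary file itself.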
In one embodiment, a first obfuscation order of the model topology information and at least part of the model parameters of each of the at least one first binary file is the same as or different from a second obfuscation order of each of the at least one first binary file, and the model topology information and at least part of the model parameters of each of the at least one first binary file may be determined by the file location information. When the first obfuscation order is the same as the second obfuscation order, the model parameters corresponding to a first binary file can be quickly determined. When the first obfuscation order is different from the second obfuscation order, information security is improved.
In one embodiment, writing the at least one first binary file and the at least one second binary file to the third binary file in sequence may include first determining an offset for changing a file start position of the first binary file and/or the second binary file in the third binary file. Then, the at least one first binary file and/or the at least one second binary file are written into the third binary file one by one based on the offset. Wherein the offset for each of the first binary file and the second binary file may be fixed or random. The offset may also be stored at a specified location of the third binary file.
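One possible realization of the offset step, under the assumption that the offset is simple padding (salt) prepended to each file to shift its start position, is sketched below; the helper name and the fixed/random choice mirror the text but are not prescribed by it:

```python
import os

def write_with_offsets(blobs, salt_len=4, random_salt=False):
    """Prepend salt_len padding bytes before each blob, shifting its
    file start position; return the buffer and the true offsets."""
    out = bytearray()
    offsets = []
    for blob in blobs:
        # The offset may be fixed (zero padding) or random per file.
        pad = os.urandom(salt_len) if random_salt else b"\x00" * salt_len
        out += pad
        offsets.append(len(out))  # true start of this blob
        out += blob
    return bytes(out), offsets

buf, offs = write_with_offsets([b"alpha", b"beta"], salt_len=4)
```

The `offsets` list would then be stored at a specified location of the third binary file, as the text notes, so a reader holding it can skip the salt.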
In one embodiment, the method further comprises: the third binary file is parsed to determine file location information, and the file location information is written to the third binary file. Thus, it can be determined by way of parsing at which positions of the third binary file the first binary file and the second binary file are stored, respectively.
For example, the file location information includes: at least one of file index information, file length information, file identification information, file start position information, and file end position information.
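A minimal sketch of how such location records might be serialized and parsed, assuming a hypothetical fixed-size (file index, start position, length) layout:

```python
import struct

# Hypothetical record layout: three little-endian 32-bit integers.
LOC_FMT = "<III"

def pack_locations(entries):
    """Serialize (index, start, length) triples into a fixed-size table."""
    return b"".join(struct.pack(LOC_FMT, *e) for e in entries)

def unpack_locations(table):
    """Recover the (index, start, length) triples from a packed table."""
    size = struct.calcsize(LOC_FMT)
    return [struct.unpack_from(LOC_FMT, table, i)
            for i in range(0, len(table), size)]

table = pack_locations([(0, 16, 128), (1, 144, 64)])
```

Identification or end-position fields could be added to each record in the same way; the disclosure requires only that at least one such field be present.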
Fig. 4 schematically shows a data flow diagram of a file compression method according to an embodiment of the present disclosure.
As shown in fig. 4, the layer types and key computation parameters of the model are first exported, generating a new binary file, such as model.
For the model file, key information (such as info) and the file length are obtained, an offset (salt) is applied to add extra obfuscation, and finally the file location and file length are written.
For the resource file, if it contains text information, compression is needed; if it is already in a binary format, compression is not needed. The length of the resource file is then obtained, salt is added for extra obfuscation, and the result is written into the new binary file.
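The text-versus-binary branching described above can be sketched with a standard compression library; `resource_to_binary` is a hypothetical helper, and the salt step is omitted here for brevity:

```python
import zlib

def resource_to_binary(data, is_text):
    """Text resources are compressed into binary form; resources
    already in a binary format pass through unchanged."""
    if is_text:
        return zlib.compress(data.encode("utf-8"))
    return data

packed = resource_to_binary("hello " * 100, is_text=True)  # text: compressed
raw = resource_to_binary(b"\x89PNG...", is_text=False)     # binary: untouched
```

Highly repetitive text shrinks substantially under such compression, while an image file, which is already binary (and typically already compressed), is written as-is.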
In another embodiment, in order to facilitate file loading on the device side, the cloud is connected to at least one edge device side.
FIG. 5 schematically shows a flow diagram of a file compression method according to another embodiment of the present disclosure.
As shown in fig. 5, after the compressed third binary file is acquired in operation S305, the method may further include operation S507.
In operation S507, the compressed third binary file and/or the authorization information are transmitted to at least one of the plurality of edge devices.
For example, the compressed third binary file is transmitted to at least one of the plurality of edge devices. For another example, the authorization information is transmitted to at least one of the plurality of edge device terminals, where the authorization information is a character string that enables the at least one of the plurality of edge device terminals to parse the file location information. It should be noted that the compressed third binary file and/or the authorization information may also be pre-stored in the edge device side, so as to facilitate file loading at the edge device side.
According to the file compression method provided by the embodiment of the disclosure, after the first type file and the second type file are converted into the binary file, confusion compression is performed, so that the content of the file is not easily snooped and cracked outside, and the information security is effectively improved.
Another aspect of the present disclosure also provides a file loading method.
Fig. 6 schematically shows a flow chart of a file loading method according to an embodiment of the present disclosure.
As shown in fig. 6, the file loading method performed by the edge device side may include operations S601 to S603.
In operation S601, a third binary file is obtained, where the third binary file includes a first binary file and a second binary file that are obfuscated together, and the arrangement manner of the first binary file and the second binary file may be represented by file location information.
In operation S603, file loading is performed from the head of the third binary file to the tail of the third binary file based on the file location information in a memory mapped file manner.
For the first binary file, the second binary file, the third binary file, and the file location information, reference may be made to the relevant description of the file compression method above, which is not repeated here.
Fig. 7 schematically shows a data flow diagram of a file loading method according to an embodiment of the present disclosure.
As shown in fig. 7, when loading is performed, the new binary file shown in fig. 4 is obtained. Memory mapping is then used to load it from head to tail in sequence; such a strategy can accelerate loading, for example by making the key information quick to read. A second type file that was compressed during conversion can be decompressed in the decompression space.
In one embodiment, in a memory-mapped file manner, loading the file from the head of the third binary file to the tail of the third binary file based on the file location information may include the following operations.
First, the third binary file is mapped to the designated storage space from the head of the third binary file to the tail of the third binary file.
Then, file location information is acquired.
Then, the required first type file and/or second type file are loaded from the designated storage space based on the file position information.
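A minimal sketch of memory-mapped loading by location, assuming the location information has already been parsed into (start, length) pairs:

```python
import mmap
import os
import tempfile

def load_by_location(path, locations):
    """Map the whole third binary file into memory and slice out each
    required blob by (start, length), instead of reading sequentially."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return [bytes(mm[s:s + n]) for s, n in locations]

# Build a tiny stand-in third file: two blobs laid out back to back.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"MODELBITSresource")
tmp.close()
parts = load_by_location(tmp.name, [(0, 9), (9, 8)])
os.unlink(tmp.name)
```

Because the mapping is backed by the page cache, only the pages actually touched are read from disk, which is what makes the flat head-to-tail layout fast to load selectively.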
In one embodiment, loading the required first type of file and/or second type of file from the specified storage space based on the file location information may include the following operations.
First, a required first binary file and/or second binary file is determined from a specified storage space based on file location information.
Model topology information and at least a portion of the model parameters corresponding to the desired first binary file are then determined based on the file location information.
Then, a required first type file is determined based on the required first binary file and the model topology information and at least part of the model parameters corresponding to the required first binary file, and/or a required second type file is determined based on the required second binary file.
For example, acquiring the file location information may include the following operations.
First, authorization information is obtained, where the authorization information is a character string that enables at least one of the plurality of edge device terminals to parse the file location information. The authorization information may be delivered by the cloud, or may be information such as characters, patterns, or scan codes printed on a physical object. This facilitates, for example, charging of service fees by the service provider.
Then, file location information is determined from the specified storage space based on the authorization information.
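The disclosure only states that the authorization information is a character string enabling the location information to be parsed; one hypothetical realization, shown purely for illustration, masks the stored location table with the string via XOR so that only a holder of the string can recover it:

```python
def decode_locations(encoded, auth):
    """Hypothetical scheme: the location table is XOR-masked with the
    authorization string; XOR with the same key recovers the plaintext."""
    key = auth.encode("utf-8")
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(encoded))

def encode_locations(plain, auth):
    # XOR is its own inverse, so encoding reuses the decoder.
    return decode_locations(plain, auth)

masked = encode_locations(b"idx=0;start=16;len=128", "AUTH-123")
plain = decode_locations(masked, "AUTH-123")
```

A production scheme would use a real cipher or MAC rather than plain XOR; the sketch only shows how a string can gate the parsing of the location information.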
According to the file loading method provided by the embodiment of the present disclosure, the plurality of resource files and model files are arranged in a flat, tiled layout, and a memory mapping manner is used, thereby facilitating quick loading.
Another aspect of the present disclosure provides a file compression apparatus.
FIG. 8 schematically shows a block diagram of a file compression apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the file compression apparatus 800 is disposed in a cloud, and the cloud is connected to at least one edge device, and may include: a file to be compressed obtaining module 810, a second binary file obtaining module 820 and a file obfuscating module 830.
The to-be-compressed file obtaining module 810 is configured to obtain a first type file and a second type file, where the first type file includes a first binary file.
The second binary file obtaining module 820 is configured to obtain a second binary file, where the second binary file is obtained by converting a second type file, and the second binary file only includes binary files.
The file obfuscating module 830 is configured to obfuscate the first binary file and the second binary file to obtain a compressed third binary file, where an arrangement manner of the first binary file and the second binary file may be represented by file location information.
In one embodiment, the apparatus further includes a file location information parsing module and a file location information writing module.
The file position information analysis module is used for analyzing the third binary file to determine the file position information.
The file position information writing module is used for writing the file position information into the third binary file.
In one embodiment, the file location information includes: at least one of file index information, file length information, file identification information, file start position information, and file end position information.
In one embodiment, the first type file is a model file that includes model topology information and model parameters. Accordingly, the file obfuscation module 830 includes: the device comprises a model topology information determining submodule, a model topology information writing submodule and a file writing submodule.
The model topology information determining submodule is used for determining model topology information and at least part of model parameters of at least one model from at least one first binary file.
The model topology information writing submodule is used for writing the obfuscated model topology information and at least part of the model parameters of each model into the third binary file.
The file writing submodule is used for sequentially writing the at least one first binary file and the at least one second binary file into the third binary file.
In one embodiment, the file writing submodule includes: an offset determination unit and a file writing unit.
The offset determining unit is used for determining an offset, and the offset is used for changing the file starting position of the first binary file and/or the second binary file in the third binary file.
The file writing unit is used for writing the at least one first binary file and/or the at least one second binary file into the third binary file one by one based on the offset.
In one embodiment, a first obfuscating order of the model topology information and at least part of the model parameters of each of the at least one first binary file is the same as or different from a second obfuscating order of each of the at least one first binary file, and the model topology information and at least part of the model parameters of each of the at least one first binary file may be determined by the file location information.
In one embodiment, the cloud is connected to at least one edge device, and the apparatus further includes at least one of: the file transmission module and the authorization information transmission module.
The file transmission module is used for transmitting the compressed third binary file to at least one of the plurality of edge device terminals.
The authorization information transmission module is used for transmitting authorization information to at least one of the plurality of edge device terminals, wherein the authorization information is a character string which enables the at least one of the plurality of edge device terminals to analyze the file position information.
Another aspect of the present disclosure provides a file loading apparatus.
Fig. 9 schematically shows a block diagram of a file loading apparatus according to another embodiment of the present disclosure.
As shown in fig. 9, the file loading apparatus 900 may include a third binary file acquiring module 910 and a file loading module 920.
The third binary file obtaining module 910 is configured to obtain a third binary file, where the third binary file includes a first binary file and a second binary file that are obfuscated together, and the arrangement manner of the first binary file and the second binary file may be represented by file location information.
The file loading module 920 is configured to load a file from the head of the third binary file to the tail of the third binary file based on the file location information in a memory mapped file manner.
In one embodiment, file loading module 920 includes: the device comprises a memory mapping submodule, a file position information acquisition submodule and a loading submodule.
The memory mapping submodule is used for mapping the third binary file to the designated storage space from the head of the third binary file to the tail of the third binary file.
The file position information acquisition submodule is used for acquiring file position information.
The loading submodule is used for loading the required first type file and/or second type file from the specified storage space based on the file position information.
In one embodiment, the loading submodule includes: a binary file determining unit, a model topology determining unit, and a file determining unit.
The binary file determining unit is used for determining the required first binary file and/or second binary file from the designated storage space based on the file position information.
The model topology determining unit is used for determining model topology information and at least part of the model parameters corresponding to the required first binary file based on the file position information.
The file determining unit is used for determining the required first type file based on the required first binary file and the model topology information and at least part of the model parameters corresponding to it, and/or determining the required second type file based on the required second binary file.
In one embodiment, the file location information obtaining sub-module includes: an authorization information unit and a file location information determination unit.
The authorization information unit is used for obtaining authorization information, wherein the authorization information is a character string which enables at least one of the edge device terminals to analyze the file position information.
The file location information determining unit is used for determining file location information from the specified storage space based on the authorization information.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the to-be-compressed file obtaining module 810, the second binary file obtaining module 820 and the file obfuscating module 830 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the file acquiring module to be compressed 810, the second binary file acquiring module 820 and the file obfuscating module 830 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware and firmware, or implemented by a suitable combination of any several of them. Alternatively, at least one of the to-be-compressed file obtaining module 810, the second binary file obtaining module 820 and the file obfuscation module 830 may be at least partially implemented as a computer program module, which, when executed, may perform a corresponding function.
FIG. 10 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, an electronic device 1000 according to an embodiment of the present disclosure includes a processor 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. Processor 1001 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 1001 may also include onboard memory for caching purposes. The processor 1001 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the present disclosure.
In the RAM 1003, various programs and data necessary for the operation of the system 1000 are stored. The processor 1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. The processor 1001 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 1002 and/or the RAM 1003. Note that the program may also be stored in one or more memories other than the ROM 1002 and the RAM 1003. The processor 1001 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in one or more memories.
According to an embodiment of the present disclosure, the system 1000 may also include an input/output (I/O) interface 1005, which is also connected to the bus 1004. The system 1000 may also include one or more of the following components connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read therefrom is installed into the storage section 1008 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The computer program performs the above-described functions defined in the system of the embodiment of the present disclosure when executed by the processor 1001. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 1002 and/or the RAM 1003 described above and/or one or more memories other than the ROM 1002 and the RAM 1003.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (15)

1. A file compression method executed by a cloud, comprising the following steps:
acquiring a first type file and a second type file, wherein the first type file comprises a first binary file;
acquiring a second binary file, wherein the second binary file is obtained by converting the second type file, and the second binary file only comprises a binary file; and
obfuscating the first binary file and the second binary file to obtain a compressed third binary file, wherein an arrangement manner of the first binary file and the second binary file can be represented by file position information.
2. The method of claim 1, further comprising:
analyzing the third binary file to determine the file position information; and
writing the file position information into the third binary file.
3. The method of claim 1 or 2, wherein the file location information comprises: at least one of file index information, file length information, file identification information, file start position information, and file end position information.
4. The method of claim 1, wherein the first type file is a model file comprising model topology information and model parameters;
the obfuscating the first binary file and the second binary file includes:
determining respective model topology information and at least part of the model parameters of at least one model from at least one of the first binary files;
writing the respective model topology information of the obfuscated at least one model and at least part of the model parameters into the third binary file; and
sequentially writing at least one first binary file and at least one second binary file into the third binary file.
5. The method of claim 4, wherein said writing at least one of the first binary files and at least one of the second binary files to the third binary file in sequence comprises:
determining an offset, wherein the offset is used for changing the file starting position of the first binary file and/or the second binary file in the third binary file; and
writing at least one of the first binary files and/or at least one of the second binary files to the third binary file one by one based on the offset.
6. The method of claim 4, wherein a first obfuscating order of model topology information and at least part of model parameters of each of the at least one first binary file is the same as or different from a second obfuscating order of each of the at least one first binary file, and the model topology information and at least part of the model parameters of each of the at least one first binary file are determinable from the file location information.
7. The method of claim 1, wherein the cloud is connected to a plurality of edge device terminals, and the method further comprises, after obtaining the compressed third binary file:
transmitting the compressed third binary file to at least one of the plurality of edge device terminals;
and/or
transmitting authorization information to at least one of the plurality of edge device terminals, wherein the authorization information is a character string that enables the at least one edge device terminal to parse the file location information.
8. A file loading method performed by an edge device side, the method comprising:
obtaining a third binary file, wherein the third binary file comprises an obfuscated first binary file and an obfuscated second binary file, and an arrangement of the first binary file and the second binary file is representable by file location information; and
loading a file, in a memory-mapped file manner, from the head of the third binary file to the tail of the third binary file based on the file location information.
9. The method of claim 8, wherein loading the file from the head of the third binary file to the tail of the third binary file based on the file location information in a memory-mapped file manner comprises:
mapping the third binary file, from its head to its tail, to a specified storage space;
obtaining the file location information; and
loading a required first type file and/or a required second type file from the specified storage space based on the file location information.
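The memory-mapped loading of claims 8 and 9 maps the combined file head-to-tail and then slices out a required file by its recorded position. A minimal sketch using Python's standard mmap module (the claims themselves are language-agnostic, so this is only one possible realization):

```python
import mmap

def load_from_mapped(path, start, length):
    """Memory-map the combined (third) binary file head-to-tail and return
    the bytes of one embedded file, given its start position and length
    from the file location information."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mapped:
            return bytes(mapped[start:start + length])
```

Because only the pages actually touched are faulted in, the edge device can extract one model without reading the whole compressed archive into memory.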
10. The method of claim 9, wherein loading the required first type file and/or second type file from the specified storage space based on the file location information comprises:
determining a required first binary file and/or a required second binary file from the specified storage space based on the file location information;
determining the model topology information and at least part of the model parameters corresponding to the required first binary file based on the file location information; and
determining the required first type file based on the required first binary file and its corresponding model topology information and at least part of the model parameters, and/or determining the required second type file based on the required second binary file.
11. The method of claim 9, wherein obtaining the file location information comprises:
obtaining authorization information, wherein the authorization information is a character string that enables at least one edge device terminal to parse the file location information; and
determining the file location information from the specified storage space based on the authorization information.
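Claim 11 gates parsing of the file location information behind an authorization character string, without saying how the string enables parsing. One hypothetical realization, shown purely as an assumption, is an HMAC check over the location table stored at the tail of the combined file:

```python
import hashlib
import hmac

def parse_location_info(mapped_bytes, auth_token):
    """Illustrative gate: the location table is only released when the caller
    presents the expected authorization string. The trailing 32-byte digest
    layout and the HMAC scheme are assumptions, not the claimed mechanism."""
    table = mapped_bytes[:-32]
    expected = mapped_bytes[-32:]  # assumed: trailing SHA-256 HMAC digest
    digest = hmac.new(auth_token, table, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise PermissionError("authorization information does not match")
    return table
```

An edge device holding the wrong string gets an error instead of the table, which matches the claim's intent of making the arrangement unreadable without authorization.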
12. A file compression apparatus disposed at a cloud, the apparatus comprising:
a to-be-compressed file obtaining module, configured to obtain a first type file and a second type file, wherein the first type file comprises a first binary file;
a second binary file obtaining module, configured to obtain a second binary file, wherein the second binary file is obtained by converting the second type file and contains only binary content; and
a file obfuscating module, configured to obfuscate the first binary file and the second binary file to obtain a compressed third binary file, wherein an arrangement of the first binary file and the second binary file is representable by file location information.
13. A file loading apparatus disposed at an edge device side, the apparatus comprising:
a third binary file obtaining module, configured to obtain a third binary file, wherein the third binary file comprises an obfuscated first binary file and an obfuscated second binary file, and an arrangement of the first binary file and the second binary file is representable by file location information; and
a file loading module, configured to load a file, in a memory-mapped file manner, from the head of the third binary file to the tail of the third binary file based on the file location information.
14. A computer system, comprising:
one or more processors;
a storage device for storing executable instructions which, when executed by the one or more processors, implement the method of any one of claims 1 to 11.
15. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, implement a method according to any one of claims 1 to 11.
CN202011265395.XA 2020-11-13 2020-11-13 File compression method and device, file loading method and device and electronic equipment Pending CN112363987A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011265395.XA CN112363987A (en) 2020-11-13 2020-11-13 File compression method and device, file loading method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011265395.XA CN112363987A (en) 2020-11-13 2020-11-13 File compression method and device, file loading method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112363987A true CN112363987A (en) 2021-02-12

Family

ID=74514849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011265395.XA Pending CN112363987A (en) 2020-11-13 2020-11-13 File compression method and device, file loading method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112363987A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656044A (en) * 2021-08-24 2021-11-16 平安科技(深圳)有限公司 Android installation package compression method and device, computer equipment and storage medium
CN113656044B (en) * 2021-08-24 2023-09-19 平安科技(深圳)有限公司 Android installation package compression method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US20180089574A1 (en) Data processing device, data processing method, and computer-readable recording medium
CN104081713B (en) The long-range trust identification of server and client computer in cloud computing environment and geographical location
US20180336369A1 (en) Anonymity assessment system
US9246914B2 (en) Method and apparatus for processing biometric information using distributed computation
US9280665B2 (en) Fast and accurate identification of message-based API calls in application binaries
US9965616B2 (en) Cognitive password pattern checker to enforce stronger, unrepeatable passwords
US9207913B2 (en) API publication on a gateway using a developer portal
US9910724B2 (en) Fast and accurate identification of message-based API calls in application binaries
US20180060605A1 (en) Image obfuscation
US20200410106A1 (en) Optimizing Operating System Vulnerability Analysis
CN116530050A (en) Secure computing resource deployment using homomorphic encryption
CN113454594A (en) Native code generation for cloud services
CN113127361A (en) Application program development method and device, electronic equipment and storage medium
US10917478B2 (en) Cloud enabling resources as a service
CN111898135A (en) Data processing method, data processing apparatus, computer device, and medium
US10089356B2 (en) Processing window partitioning and ordering for on-line analytical processing (OLAP) functions
CN112363987A (en) File compression method and device, file loading method and device and electronic equipment
US20160328648A1 (en) Visual summary of answers from natural language question answering systems
US11748292B2 (en) FPGA implementation of low latency architecture of XGBoost for inference and method therefor
CN114756833A (en) Code obfuscation method, apparatus, device, medium, and program product
CN115514632A (en) Resource template arranging method, device and equipment for cloud service and storage medium
CN114816361A (en) Method, device, equipment, medium and program product for generating splicing project
CN110851754A (en) Webpage access method and system, computer system and computer readable storage medium
US20240154802A1 (en) Model protection method and apparatus
CN115577372A (en) Data interaction method, device and equipment applied to secret-related information network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination