CN111818139B - Wireless heterogeneous control computing system based on neural network - Google Patents

Wireless heterogeneous control computing system based on neural network

Info

Publication number
CN111818139B
CN111818139B (application CN202010598370.5A)
Authority
CN
China
Prior art keywords
calculation
neural network
host control
elastic
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010598370.5A
Other languages
Chinese (zh)
Other versions
CN111818139A (en)
Inventor
赵凤萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dunyu Shanghai Internet Technology Co ltd
Original Assignee
Dunyu Shanghai Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dunyu Shanghai Internet Technology Co ltd filed Critical Dunyu Shanghai Internet Technology Co ltd
Priority to CN202010598370.5A priority Critical patent/CN111818139B/en
Publication of CN111818139A publication Critical patent/CN111818139A/en
Application granted granted Critical
Publication of CN111818139B publication Critical patent/CN111818139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a wireless heterogeneous control computing system based on a neural network, comprising: a mobile intelligent terminal: an edge computing terminal that independently completes the required real-time acquisition and communication, performs complete neural network calculation, and supports multiple modes of wireless communication; it provides the host control terminal with neural network calculation guarantee without imposing a calculation burden, and with storage, synchronization, and updating of various data sources without imposing a storage burden; a host control terminal: it identifies, processes, and bidirectionally executes control instructions, obtains the data sent by the mobile intelligent terminal through the network, and participates in the co-calculation of the peripheral neural network, serving as an indispensable, dual calculation guarantee; an asymmetric operator can be enabled on the host control terminal to ensure that the mobile intelligent terminal cannot bypass or copy the peripheral neural network calculation dominated by the host control terminal and drive the controlled device.

Description

Wireless heterogeneous control computing system based on neural network
Technical Field
The invention relates to the technical field of heterogeneous network computing, in particular to a wireless heterogeneous control computing system based on a neural network.
Background
Human-machine remote control is one of the modes that facilitate operation and control, and it plays an important role in providing various offline, face-to-face, dedicated services such as targeted self-service, robots, identity-recognition access control, contactless electronic locks, and unobtrusive recognition-based preference recommendation. However, if every device must carry the complete artificial-intelligence neural network calculation task, each device is bound to collect part of the original client data, which imposes requirements and constraints in terms of user privacy, calculation power consumption, and physical size, and may therefore limit the wide application of artificial intelligence.
For example, when an intelligent camera is installed on a targeted self-service kiosk, a consumer has to give out personal biometric feature information (face, iris, fingerprint, voice, expression, and other biometric traits), which is the personal-information cost of obtaining a single intelligent service. Meanwhile, the self-service kiosk needs to be additionally equipped with a camera, sensors, power supply, storage, and a network to support the intelligent calculation, and the size and cost of the equipment increase accordingly.
For another example, a service robot gains more hardware and neural network calculation for more intelligent services, but its volume, weight, and energy consumption increase correspondingly, as does its manufacturing cost; if the robot is to keep a miniature volume, it is constrained by power supply, cameras, sensors, and other physical components. If there are many service robots in an area, the hardware cost is duplicated for each of them.
In scenes such as contactless access control and electronic locks with high security level and high reliability requirements, adding larger intelligent computing components to the intelligent hardware increases power consumption and cost, and the physical size restricts the range of application: what was originally the small space of a key becomes the space of a small computer, and what originally ran without wired power or batteries now needs additional circuits, or needs its battery power consumption optimized so that there is no energy failure over a period of several months; in practice, such as outdoor use and motor-vehicle locks, the hardware components are also subject to harsh specifications and safety requirements.
The above scenes and application forms have strong requirements for artificial intelligence, particularly for the neural network calculation link, and also place strict requirements on a purely edge-side neural network; they test both the enterprises and the users across the wide field of artificial intelligence applications. This is the background of the problem this patent intends to solve: realizing part of the neural network calculation through a heterogeneous calculation method over wireless communication so as to achieve cooperative intelligence, while no single device is allowed to directly obtain the original user data.
In view of the defects in the prior art, the technical problems to be solved at present are as follows:
1. Computing and imaging hardware already in the existing installed base — particularly personal smartphones, smart watches, and other devices with strong computing capability, equipped with cameras, microphones, network communication, and other sensing equipment — should be used to realize acquisition and front-end network calculation; the calculation task therefore needs to be divided out of the neural network and assigned to different heterogeneous subjects while continuity is ensured.
2. The calculation tasks are divided according to the current calculation service, so that public devices achieve high concurrency while private devices achieve high adaptability and high confidentiality.
3. More heterogeneous proprietary computing devices can also share and allocate idle computing resources and schedule the arrangement of calculation tasks in combination with the characteristics of wireless communication.
4. Who initiates, who organizes, who coordinates, and who controls can all be determined through adaptive dynamic negotiation.
5. Part of the neural network calculation achieves cooperative intelligence through a heterogeneous calculation method over wireless communication; no single device directly acquires the original user data, and the intermediate link of heterogeneous calculation is safer and more convenient than encrypted transmission and storage of the original image data.
Patent document CN201810646003.0 discloses a training method and device for accelerating a distributed deep neural network, wherein the method comprises: based on parallel training, designing the training of the deep neural network as distributed training and dividing the deep neural network model to be trained into a plurality of sub-networks; dividing the training sample set into a plurality of sub-sample sets; and, based on a distributed cluster architecture and a preset scheduling method, training the deep neural network with the sub-sample sets, each round of training being carried out by the sub-networks simultaneously, thereby completing distributed training of the deep neural network. Because the influence of network delay on the sub-networks under distributed training can be locally reduced by the distributed cluster architecture and the preset scheduling method, the training strategy is adjusted in real time, the progress of the sub-networks trained in parallel is synchronized, the time to complete distributed deep neural network training is shortened, and training is accelerated.
Comparison of technical points: the cited patent applies a distributed approach to the training link to accelerate sub-network training; its goal is acceleration, its link is training, and its minimum granularity is the sub-network. The goal of the present patent is inference and deduction and does not involve training; its aim is to complete, by means of the hardware characteristics of different media, the media acquisition capability that was never designed to run complete neural network calculation. Speed may be sacrificed to task segmentation, redundant calculation, consistency calculation verification, negotiation detection, and the like, and the minimum segmentation granularity supported is the final tail-end mathematical calculation rather than division at the sub-network level.
Patent document CN201811242301.X discloses a distributed face recognition door lock system based on a convolutional neural network, which comprises a Raspberry Pi master control end and a neural network recognition server slave control end. The Raspberry Pi master control end comprises: a main operation logic module, a user management and data set preparation module, and an instruction and communication management module; it is used for lock-state control, image acquisition, data preparation and preprocessing, and sending instructions to the neural network recognition server for processing, and an infrared sensor inputs an infrared signal to the main operation logic module. The neural network recognition server comprises: a neural network training module, a recognition and judgment module, and a main service logic module; serving as the distributed slave control end, the neural network server receives instructions from the Raspberry Pi master control end, and judges and replies to the data sent.
Comparison of technical points: in the cited patent, the neural network calculation is completed purely in one step; there is no distributed heterogeneous neural network calculation design, the wireless technology is used for communicating results rather than for negotiation and cooperation between network-layer calculations, and the controller and the scanning recognition module are driven passively based on the control signal of the calculation result. These points differ from the present application, although both patents aim to drive intelligent hardware through a neural network.
Patent document CN201410067916.9 discloses a language model training method based on a distributed neural network and a system thereof, the method comprising: splitting a large word list into a plurality of small word lists; each small word list corresponding to a neural network language model, where the input dimensionality of each neural network language model is the same and each is trained independently for the first time; combining the output vectors of the neural network language models and carrying out a second training; and obtaining a normalized neural network language model. The system comprises: an input module, a first training module, a second training module, and an output module.
Comparison of technical points: the cited patent expects to improve local computing capacity and efficiency by partitioning large and complex content to be recognized. In the present patent, the problem to be solved and the size of the data set are handled through cooperation between medium one and medium three, while the role of medium two is the consistency verification of the peripheral neural network; adding medium two, whose native capability is limited, to the participants of the neural network can enhance the accuracy and reliability of the neural network calculation and reduce the potential security risk of intelligent communication recognition by a wireless controller.
Patent document CN201910693037.X discloses distributed deep reinforcement learning based on fused neural network parameters. The method comprises the following steps: (1) deploying a deep reinforcement learning agent on each working node; (2) at regular intervals, all working nodes send their respective neural network parameters and the currently obtained average return to a parameter server; (3) the parameter server receives the neural network parameters and the average return sent by all working nodes; (4) the parameter server calculates a parameter coefficient corresponding to each working node according to the average return; (5) the parameter server calculates new neural network parameters from all the neural network parameters and their parameter coefficients; (6) all working nodes start learning using the new neural network parameters.
Comparison of technical points: the present patent deliberately divides the neural network calculation, but not for the purpose of cooperative acceleration; it is applicable to the algorithms of various neural networks, including the inference processes of deep learning, reinforcement learning, incremental learning, and the like, but does not involve the training link. The three media participating in the present patent are clearly divided, the same kind of calculation task is not carried out in parallel, and the roles of the participants differ.
Patent document CN201711319211.1 discloses a full-scale distributed whole-brain simulation system based on a brain-like spiking neural network, which aims to solve the lack of multi-scale modeling methods and the coupling of modeling and simulation in brain-like simulation. The system comprises a user modeling layer unit, a model layer unit, an intermediate abstraction layer unit, and a simulation layer unit. In the user modeling layer unit, a user can model based on a whole-brain model using a modeling script language. In the model layer unit, the system stores built-in models and user-built models and converts them into an intermediate abstraction. In the intermediate abstraction layer unit, the system assembles the intermediate abstractions and converts them into a runtime format. In the simulation layer unit, the system reads the runtime format, runs the simulation, and interacts with the user in real time.
Comparison of technical points: research that deepens the field of neural networks themselves does not conflict with the heterogeneous neural network of the present patent; that is, a divisible, block-partitionable manner of calculation can be found in new and varied neural networks and used in the design of the present patent.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a wireless heterogeneous control computing system based on a neural network.
The invention provides a wireless heterogeneous control computing system based on a neural network, which comprises:
the mobile intelligent terminal: an edge computing terminal that independently completes the required real-time acquisition and communication, performs complete neural network calculation, and supports multiple modes of wireless communication; it provides the host control terminal with neural network calculation guarantee without imposing a calculation burden, and with storage, synchronization, and updating of various data sources without imposing a storage burden;
the host control terminal: it identifies, processes, and bidirectionally executes control instructions, obtains the data sent by the mobile intelligent terminal through the network, and participates in the co-calculation of the peripheral neural network, serving as an indispensable, dual calculation guarantee; an asymmetric operator can be enabled on the host control terminal to ensure that the mobile intelligent terminal cannot bypass or copy the peripheral neural network calculation dominated by the host control terminal and drive the controlled device;
the elastic co-computation cluster: it mainly provides data-source expansion and instruction-updating services for the host control terminal, and also provides elastic neural network calculation for the mobile intelligent terminal; the expansion mode of the elastic neural network calculation is transparent to the host control terminal and is completed only by the mobile intelligent terminal.
Preferably, a wireless heterogeneous input/output module WNIO and a neural network negotiation and neural network calculation module NNNC are arranged on the mobile intelligent terminal;
the wireless heterogeneous input/output module WNIO: determines, according to the type of the acquired image, whether the uplinked elastic co-computation cluster needs to be used; to improve calculation and response efficiency, when possible the calculation task is completed directly on the local terminal without obtaining a network model or calculation assistance through the uplink;
the neural network negotiation and neural network calculation module NNNC: collects and stores basic data and performs normalized preprocessing on the data; then, according to the algorithm selected for the application service, the host control terminal most suitable for the current service is loaded, together with the neural network, into the mobile intelligent terminal for cooperation; then, the multi-layer calculation division of the neural network is performed.
Preferably, the elastic co-computation cluster comprises: public clouds, private clouds, and distributed proprietary computing devices.
Preferably, the basic data includes: photos, pictures, speech, fingerprints, irises, special expressions and facial features, and other collected data with recognizable features;
the normalized preprocessing refers to: normalization over the value range 0-255 and vector matrixing for the application service, wherein the algorithms include: neural network classification and feature extraction algorithms;
the sources of the loading include: the local mobile intelligent terminal and the elastic co-computation cluster.
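As a non-limiting illustration of the normalized preprocessing described above, the following Python sketch scales 0-255 values into [0, 1] and flattens each sample into one row of a vector matrix; the flattening layout and the use of NumPy are assumptions of this sketch, not requirements of the system.

```python
import numpy as np

def normalize_and_matrixize(raw: np.ndarray) -> np.ndarray:
    """Normalized preprocessing as described above: scale raw values from the
    0-255 range into [0, 1] and reshape the batch into a 2-D vector matrix.
    The exact layout is an assumption; the text only specifies normalization
    over 0-255 and vector matrixing."""
    data = raw.astype(np.float32) / 255.0     # 0-255 -> [0, 1]
    return data.reshape(data.shape[0], -1)    # one row vector per sample

# Example: a batch of 4 grayscale 32x32 captures
batch = np.random.randint(0, 256, size=(4, 32, 32), dtype=np.uint8)
matrix = normalize_and_matrixize(batch)       # shape (4, 1024)
```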
Preferably, the decision of whether the uplinked elastic co-computation cluster needs to be used is made according to the type of the acquired image:
type-A images, used for small feature recognition, are not uplinked; type A comprises: human face, voice, waveform, gesture, and fingerprint, with a size not exceeding 512x512;
a type-A image supplemented with externally identified data of the same type is upgraded to a type-B image;
type-C images include: images with features to be recognized whose size exceeds 512x512, images whose target object is large or of preset volume, and public data with privacy or confidentiality requirements;
type-B and type-C images are uplinked.
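The A/B/C classification and the resulting uplink decision can be pictured with a short sketch. The Python fragment below is only an illustration; the class name, field names, and constants other than the 512x512 threshold are hypothetical and not part of the claimed system.

```python
from dataclasses import dataclass

SMALL_FEATURE_KINDS = {"face", "voice", "waveform", "gesture", "fingerprint"}
MAX_LOCAL_SIZE = 512  # images up to 512x512 stay local (type A)

@dataclass
class CapturedImage:
    kind: str                  # e.g. "face", "scene", ...
    width: int
    height: int
    has_external_same_type_id_data: bool = False  # supplement that upgrades A -> B
    privacy_sensitive_public_data: bool = False

def classify(img: CapturedImage) -> str:
    small = img.width <= MAX_LOCAL_SIZE and img.height <= MAX_LOCAL_SIZE
    if img.kind in SMALL_FEATURE_KINDS and small:
        # Type A, unless supplemented with external same-type identification data
        return "B" if img.has_external_same_type_id_data else "A"
    return "C"  # large images, large/preset-volume targets, or sensitive public data

def needs_uplink(img: CapturedImage) -> bool:
    # Type A is computed locally; types B and C use the elastic co-computation cluster.
    return classify(img) in {"B", "C"}
```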
Preferably, the host control terminal refers to: the lower-level machine actually in use;
the host control terminal: performs communication key verification, peripheral neural network matching, and control of function triggering.
Preferably, the elastic co-computation cluster:
when the host control terminal needs large-scale calculation and data storage, it is dynamically loaded and allocated through the mobile intelligent terminal, serving as the back-end data warehouse and algorithm support of the host control terminal and as the calculation cooperation support of the mobile intelligent terminal.
Preferably, the elastic co-computation cluster:
in extreme case 1, the mobile intelligent terminal reports the transparently transmitted data and the calculation task to the elastic co-computation cluster, and everything except the sensor acquisition task is completed by the elastic co-computation cluster, including the redundant dual calculation guarantee of the peripheral neural network;
in extreme case 2, the elastic co-computation cluster network is unreachable or is not used at all, and the mobile intelligent terminal idles and waits after completing all the neural network calculation requirements of the host control terminal.
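A minimal sketch of how a terminal might arbitrate between these two extreme cases is given below; `local_nn`, `cluster`, and `transparent_mode` are hypothetical names introduced only for illustration, not part of the patent.

```python
def run_inference(sample, local_nn, cluster=None, transparent_mode=False):
    # Extreme case 1: the terminal only acquires and transparently forwards;
    # the elastic cluster does everything else, including the redundant
    # peripheral-neural double-check.
    if transparent_mode and cluster is not None:
        return cluster.compute(sample)
    # Extreme case 2 (and the cluster-unreachable case): the mobile terminal
    # completes all neural network calculation required by the host control
    # terminal by itself, then idles and waits.
    return local_nn(sample)
```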
Preferably, the elastic co-computation cluster is transparent to the host control terminal, invisible to it, and not directly connected to it; updating and iteration also take place on the elastic co-computation cluster, while the computing power and end-side neural calculation form on the host control terminal can be ultra-low-power pulse-driven and its hardware instruction set remains unchanged for a long time;
the host control terminal and the mobile intelligent terminal are the main bodies of the calculation relationship; the elastic co-computation cluster does not directly participate in the calculation relationship, belongs to the expansion of the mobile intelligent terminal, and does not directly participate in the business or the algorithm.
Compared with the prior art, the invention has the following beneficial effects:
the beneficial effects of this patent application are that can rely on hardware and the mobile device of present computing power, embedded MCU provides the large tracts of land can fall to the ground extensible real-time towards individual consumer's artificial intelligence neural network computational service, to at indoor outdoor intelligent public facility, the electronic lock, service robot, the artificial intelligence of all kinds of sign discernments on the car is used, the application promotion that can both realize artificial intelligence of high-efficient quick and short period falls to the ground, reduce the hardware repetitive design who falls to the ground trade in-process, there is irreconcilable contradiction such as energy consumption and hardware ability space conflict in addition.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic diagram of a wireless heterogeneous neural computation framework and software modules provided by the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications can be made by those skilled in the art without departing from the spirit of the invention; all of these fall within the scope of the present invention.
The invention provides a wireless heterogeneous control computing system based on a neural network, which comprises:
the mobile intelligent terminal: an edge computing terminal that independently completes the required real-time acquisition and communication, performs complete neural network calculation, and supports multiple modes of wireless communication; it provides the host control terminal with neural network calculation guarantee without imposing a calculation burden, and with storage, synchronization, and updating of various data sources without imposing a storage burden;
the host control terminal: it identifies, processes, and bidirectionally executes control instructions, obtains the data sent by the mobile intelligent terminal through the network, and participates in the co-calculation of the peripheral neural network, serving as an indispensable, dual calculation guarantee; an asymmetric operator can be enabled on the host control terminal to ensure that the mobile intelligent terminal cannot bypass or copy the peripheral neural network calculation dominated by the host control terminal and drive the controlled device;
the elastic co-computation cluster: it mainly provides data-source expansion and instruction-updating services for the host control terminal, and also provides elastic neural network calculation for the mobile intelligent terminal; the expansion mode of the elastic neural network calculation is transparent to the host control terminal and is completed only by the mobile intelligent terminal.
Specifically, a wireless heterogeneous input and output module WNIO and a neural network negotiation and neural network calculation module NNNC are arranged on the mobile intelligent terminal;
the wireless heterogeneous input and output module WNIO: determines, according to the type of the acquired image, whether the uplinked elastic co-computation cluster needs to be used; to improve calculation and response efficiency, when possible the calculation task is completed directly on the local terminal without obtaining a network model or calculation assistance through the uplink;
the neural network negotiation and neural network calculation module NNNC: collects and stores basic data and performs normalized preprocessing on the data; then, according to the algorithm selected for the application service, the host control terminal most suitable for the current service is loaded, together with the neural network, into the mobile intelligent terminal for cooperation; then, the multi-layer calculation division of the neural network is performed.
Specifically, the elastic co-computation cluster includes: public clouds, private clouds, and distributed proprietary computing devices.
Specifically, the basic data includes: photos, pictures, speech, fingerprints, irises, special expressions and facial features, and other collected data with recognizable features;
the normalized preprocessing refers to: normalization over the value range 0-255 and vector matrixing for the application service, wherein the algorithms include: neural network classification and feature extraction algorithms;
the sources of the loading include: the local mobile intelligent terminal and the elastic co-computation cluster.
Specifically, the decision of whether the uplinked elastic co-computation cluster needs to be used is made according to the type of the acquired image:
type-A images, used for small feature recognition, are not uplinked; type A comprises: human face, voice, waveform, gesture, and fingerprint, with a size not exceeding 512x512;
a type-A image supplemented with externally identified data of the same type is upgraded to a type-B image;
type-C images include: images with features to be recognized whose size exceeds 512x512, images whose target object is large or of preset volume, and public data with privacy or confidentiality requirements;
type-B and type-C images are uplinked.
Specifically, the host control terminal refers to: the lower-level machine actually in use;
the host control terminal: performs communication key verification, peripheral neural network matching, and control of function triggering.
Specifically, the elastic co-computation cluster:
when the host control terminal needs large-scale calculation and data storage, it is dynamically loaded and allocated through the mobile intelligent terminal, serving as the back-end data warehouse and algorithm support of the host control terminal and as the calculation cooperation support of the mobile intelligent terminal.
Specifically, the elastic co-computation cluster:
in extreme case 1, the mobile intelligent terminal reports the transparently transmitted data and the calculation task to the elastic co-computation cluster, and everything except the sensor acquisition task is completed by the elastic co-computation cluster, including the redundant dual calculation guarantee of the peripheral neural network;
in extreme case 2, the elastic co-computation cluster network is unreachable or is not used at all, and the mobile intelligent terminal idles and waits after completing all the neural network calculation requirements of the host control terminal.
Specifically, the elastic co-computation cluster is transparent to the host control terminal, invisible to it, and not directly connected to it; updating and iteration also take place on the elastic co-computation cluster, while the computing power and end-side neural calculation form on the host control terminal can be ultra-low-power pulse-driven and its hardware instruction set remains unchanged for a long time;
the host control terminal and the mobile intelligent terminal are the main bodies of the calculation relationship; the elastic co-computation cluster does not directly participate in the calculation relationship, belongs to the expansion of the mobile intelligent terminal, and does not directly participate in the business or the algorithm.
The present invention will be described more specifically below with reference to preferred examples.
Preferred example 1:
the computing method is totally dependent on four main software modules and three types of physical media, wherein the third medium on the graph is an elastic role, the other two media are rigid and must participate in the computing method, the following graph I can see the whole heterogeneous computing framework and the relationship between the three media and the contained four main software modules.
1. Medium one includes, but is not limited to, smartphones, watches, PDAs, and intelligent mobile digital terminals, which must have sensors, processors, storage, and networking available, and in particular two or more types of communication networks usable simultaneously for uplink and downlink. The software modules running inside medium one are the Wireless heterogeneous Input-Output module (WNIO) and the Neural network Negotiation and Neural network Calculation module (NNNC).
1.1 WNIO, as the first software module, decides according to business needs whether the connected elastic cooperation cluster must be used for calculation [type A is not uplinked, types B/C are uplinked]; the cluster can be a public cloud, a private cloud, or specific distributed special-purpose computing devices. In general, to improve calculation and response efficiency, the calculation task is completed directly on the local terminal without obtaining a network model or calculation assistance through the uplink. The decision follows the type of the acquired image: type A, which is not uplinked, is small feature recognition (face, voice, waveform, gesture, fingerprint, etc.) with a size not exceeding 512x512; a type-A image supplemented with externally identified data of the same type is upgraded to type B; type C covers other images with features to be recognized whose size exceeds 512x512, images whose target object is a large-volume object, and public data with privacy requirements.
1.2 Different communication protocols are adopted for file transmission, neural network calculation task transmission, and instruction transmission. In particular, the transmission of a neural network calculation task needs the sample values and arrays of the calculation matrix, and the network model parameters are communicated as messages, so that the demands on the system network, especially BLE and NFC communication, can be reduced to the maximum extent over the wireless communication channel. Neural networks have different algorithms; although the computation here is heterogeneous, the key point of the heterogeneous design lies in giving full play to the central role of medium one from acquisition to preprocessing to storage to the multi-layer network calculation, while the final election link of the neural calculation is synchronized to the end and completed by medium two. The purpose of this arrangement is to match and form the heterogeneous relationship to the greatest extent, ensuring data security and stronger directivity while reducing the acquisition pressure at the end and the pressure of complex network calculation. The expansion of elastic computing is a necessary consideration for multi-machine cooperation scenarios and for growing storage space. Control instructions are not sent directly through medium one; otherwise medium two could be controlled directly by instructions and would lose its initiative, which is an important inventive point of this patent.
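To make the message-based transmission concrete, the sketch below shows one possible shape of a neural-network calculation-task message and how it might be split into small chunks for a low-bandwidth wireless channel such as BLE; all field names, the JSON encoding, and the 180-byte chunk size are assumptions of this sketch, not specifications of the patent.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class NNTaskMessage:
    """Illustrative message carrying one heterogeneous calculation task.
    The text only requires that sample values, calculation-matrix arrays,
    and network model parameters travel as messages; everything else here
    is an assumption."""
    task_id: str
    layer_range: List[int]      # which pipeline steps the receiver should run
    samples: List[float]        # flattened sample values
    matrix: List[List[float]]   # calculation matrix for those steps
    params: List[float]         # network model parameters for those steps

def encode(msg: NNTaskMessage, mtu: int = 180) -> List[bytes]:
    """Serialize and split into MTU-sized chunks suitable for a BLE-like link."""
    blob = json.dumps(asdict(msg)).encode("utf-8")
    return [blob[i:i + mtu] for i in range(0, len(blob), mtu)]
```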
1.3 The negotiation and matching information of the downlink (the process of connecting to the end controller, i.e. medium two, is the downlink) can be stored; negotiation and matching are restarted only after either party is reset or cleared, or after multiple matching failures, which saves search, negotiation, and pairing time. The data mainly exists on medium one and medium three rather than on the control terminal, but in order to keep high-speed pairing and recognition indexes for the pre-calculation task between medium one and medium two, a certain amount of identification and index information needs to be stored on both, and its data volume is small. If either medium is reset, powered down, has its firmware updated, or the like, renegotiation and generation of new identifiers and indexes are required; otherwise there is no way to coordinate the calculation efficiently.
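The stored negotiation and matching information of 1.3 can be thought of as a small pairing cache. The sketch below assumes an in-memory dictionary; the class and method names are hypothetical and only illustrate the invalidation rule described above.

```python
class PairingCache:
    """Hypothetical downlink pairing cache kept on medium one (and mirrored
    on medium two). A small identifier/index record lets the two media skip
    search and negotiation on later connections."""

    def __init__(self):
        self._records = {}   # device_id -> (pairing_key, index_info)

    def remember(self, device_id, pairing_key, index_info):
        self._records[device_id] = (pairing_key, index_info)

    def lookup(self, device_id):
        # None means a full negotiation and pairing round is needed.
        return self._records.get(device_id)

    def invalidate(self, device_id):
        # Called after reset, power-down, firmware update, or repeated
        # matching failures on either side, forcing renegotiation.
        self._records.pop(device_id, None)
```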
1.4 The negotiation and matching information of the uplink (the process of connecting to medium three is the uplink, whether medium two connects to medium three through medium one or medium one connects to medium three directly) can likewise be stored, and negotiation and matching are restarted only after either party is reset or cleared, or after multiple matching failures, saving search, negotiation, and pairing time. When the communication is unreachable or no elastic proprietary cluster is available, the calculation is carried out directly on medium one.
2. The NNNC, as the second software module, mainly collects basic data (including photos, pictures, voice, fingerprints, irises, special expressions, facial features, and other collected data with recognizable features), stores it, and performs normalized preprocessing (normalization over the value range 0-255 and vector matrixing). Then, according to the algorithm of the application service (the invention is compatible with the neural network classification and feature extraction algorithms currently on the market, such as convolution classification, sampling, add/subtract/multiply/divide calculator operators, multidimensional array sorting, and the like), the medium two most suitable for the current service is loaded into medium one together with the neural network (the source of the loading can be medium one itself or medium three). Then, the multi-layer calculation division of the neural network is performed. The layer-by-layer calculation pipeline of a neural network processes data one layer at a time with a different operator algorithm at each layer, which makes it convenient to place different calculation layers on different hardware units according to design considerations such as efficiency, capacity, security, and logic requirements; for the end controller, the calculation capability under certain conditions is satisfied, and the differences in the number of calculation bits, modes, and capacities of each body also determine the division within the peripheral network: the closer a layer is to the periphery, the lower the data volume and calculation complexity, but the deeper its association with the preceding upper layers. The simplest neural classification network is used here for demonstration. The calculation process of the complete neural network algorithm includes five steps: the first step converts the data set into a vector matrix to form the basic data table to be processed; the second step performs sampling and selects the classified data element vectors of a suitable algorithm; the third step uses a mathematical calculator (such as addition and subtraction) to perform the convolution of the vectors and the matrix; the fourth step performs sampling and matrix normalization once more; and the fifth step uses a classifier to calculate the weights of the top-ranked representative data in the final data set.
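For readers who prefer code, the five steps of the demonstration network can be written out roughly as follows. This NumPy sketch only illustrates the step ordering; the kernel, the pooling stride, and the softmax classifier are arbitrary stand-ins for whatever operators the application service actually selects.

```python
import numpy as np

def step1_vectorize(dataset):                       # vector-matrix conversion
    return np.asarray(dataset, dtype=np.float32).reshape(len(dataset), -1)

def step2_sample(table, keep):                      # sampling / element selection
    return table[:, :keep]

def step3_convolve(vectors, kernel):                # calculator-style convolution
    return np.stack([np.convolve(v, kernel, mode="same") for v in vectors])

def step4_resample_normalize(feats):                # second sampling + normalization
    pooled = feats[:, ::2]
    return pooled / (np.linalg.norm(pooled, axis=1, keepdims=True) + 1e-9)

def step5_classify(feats, weights):                 # weights of top-ranked classes
    scores = feats @ weights
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Example run on random data: 8 samples, 64 raw values each, 3 classes
data = np.random.rand(8, 64)
w = np.random.rand(16, 3)                           # 32 kept values pooled to 16
out = step5_classify(step4_resample_normalize(
        step3_convolve(step2_sample(step1_vectorize(data), 32),
                       kernel=np.array([0.25, 0.5, 0.25]))), w)
```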
For this exemplary neural network calculation, according to the calculation capability and category of medium three, the third, fourth, and fifth steps may be placed on medium two. Whenever the calculation feedback from medium two differs from the result of the continuous execution on medium one, it is considered incorrect and is recalculated; if it fails N consecutive times (the tolerance value of N is determined by the algorithm and the application service), the network calculation is considered to have failed. If the calculation synchronization time deviates beyond the tolerance of the corresponding algorithm and service, negotiation moves the third step back onto medium one while the fourth and fifth steps are still placed on medium two, and so on back and forth. If a consistent result still cannot be achieved even when only the fifth, i.e. last, step is sliced off, the calculation is considered to have failed completely; if consistent calculation can be completed at an intermediate step, the state of the algorithm and of the algorithm slicing is recorded on the medium and the negotiated heterogeneous distribution method is reused next time. If the algorithm has more steps, the consistency verification calculation on both media starts from the last M steps, where M usually takes a value in [2,5]. The seemingly redundant calculation on medium two does not accelerate the neural network calculation — some performance is sacrificed — but the approach is broadly compatible with the mainstream neural network algorithms currently on the market, because the data set and calculation method of the final steps tend to be a fitting relation over basic calculations, which suits a medium two designed for low calculation complexity; and because the varieties of medium two differ, the selection, negotiation, switching, and storage are completed cooperatively by medium one and medium three.
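The back-and-forth renegotiation described above can be summarized in a short Python sketch. It assumes hypothetical callables `run_on_one` and `run_on_two` that execute a contiguous range of pipeline steps on medium one and medium two respectively; N corresponds to the tolerance value mentioned in the text, and everything else is illustrative.

```python
def negotiate_split(run_on_one, run_on_two, state, total_steps=5,
                    first_offloaded=3, n_tolerance=3):
    """Find a step split whose results are consistent on both media.
    Steps [split, total_steps] run on medium two; medium one keeps the rest."""
    split = first_offloaded
    while split <= total_steps:
        ref = run_on_one(split, total_steps, state)        # reference result
        for _ in range(n_tolerance):
            peer = run_on_two(split, total_steps, state)   # medium two's result
            if peer == ref:
                # Consistent: record this slicing and reuse it next time.
                return {"ok": True, "split_at": split}
        # N consecutive mismatches: pull one more step back onto medium one,
        # keeping only the later steps on medium two, and try again.
        split += 1
    # Even slicing only the last step failed: the calculation fails completely.
    return {"ok": False, "split_at": None}
```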
3. Medium two and medium one constitute the necessary media links, and medium two contains the third software module. Medium two is the lower-level machine actually in use, for example the lock cylinder of an intelligent door lock or the body of a sweeping robot, that is, the physical device and enabling hardware at the extreme edge. Usually the computing power of medium two is very weak, or it lacks the capability for parallel neural network calculation, but it can serve as a co-processing link of the peripheral neural calculation. Therefore, the main purpose of selecting medium two as one of the neural network links is not to share the calculation load, but to preserve task reliability, continuity, and replaceability without losing initiative; after all, medium two carries a very important security-control link. Performing network-layer calculation on it also resists problems such as interference and the weak security of the wireless network. Medium two does not need the hardware computing power and comprehensive hardware composition of medium one, and does not need major modification in hardware composition or size; meanwhile, medium one can connect to several medium-two devices at the same time, breaking the redundant, repeated cost of separate network calculation on many single devices in artificial-intelligence applications. By means of medium one and medium two, a device can be quickly upgraded into an intelligent terminal with neural network computing capability.
4. Medium three is an elastically expanded physical medium containing the fourth software module. It mainly completes dynamic loading and allocation through medium one when medium two needs large-scale, complex calculation and data storage, including different neural network models (models compatible with existing multi-layer convolutional neural networks are all applicable; if the end is not of the multi-layer convolution or calculator type, they are not applicable). Since the mobile terminal of medium one does not hold all the network model algorithms and data element sets, medium three serves as the data warehouse and algorithm support behind it and cooperates with the calculation of medium one. For medium two, however, medium three is transparent, invisible, and cannot be connected directly (medium two and medium one are the main bodies of the calculation relationship, whereas medium three does not directly participate in the calculation relationship; it belongs to the expansion of medium one and does not directly participate in the algorithm — from the perspective of the edge end, medium two, medium three does not exist). Updating and iteration also take place on medium three, while the computing capability and end-side neural calculation form on medium two can be ultra-low-power pulse-driven and its hardware instruction set remains unchanged for a long time.
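To illustrate the elastic loading role of medium three, the sketch below shows a hypothetical model registry queried through medium one on behalf of medium two, which never talks to the cluster directly; the registry interface, class names, and model identifiers are assumptions of this sketch and are not part of the patent.

```python
class ElasticModelRegistry:          # runs on medium three
    def __init__(self):
        self._models = {}            # model_id -> parameter blob

    def publish(self, model_id, params):
        self._models[model_id] = params

    def fetch(self, model_id):
        return self._models.get(model_id)


class MobileTerminal:                # medium one
    def __init__(self, registry=None):
        self.registry = registry     # handle to medium three, may be absent
        self.local_models = {}

    def load_for_service(self, model_id):
        # Prefer the local copy; fall back to the elastic cluster if present.
        if model_id in self.local_models:
            return self.local_models[model_id]
        if self.registry is not None:
            params = self.registry.fetch(model_id)
            if params is not None:
                self.local_models[model_id] = params   # cache for next time
                return params
        return None                  # medium two never sees the cluster
```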
Implementation example 1: software with artificial-intelligence recognition and neural network computing capability runs on a personal intelligent terminal to cooperate in the computing-power upgrade of peripheral intelligent hardware.
Implementation example 2: biometric signs are recognized on a smartphone or watch to trigger the matching response of other wirelessly connected devices, realizing the group intelligence of many devices over a large area.
Implementation example 3: unlocking, door opening, and public-area identification can be carried out with artificial-intelligence recognition on personal intelligent devices, without leaving personal biometric feature information on any non-personal mobile intelligent terminal.
Implementation example 4: a piece of intelligent hardware with voice control, face control, or biometric control can be released without being equipped with a camera or sensors; as long as it supports wireless communication and has the corresponding mobile phone software, a mobile phone can complete the task in its place.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (6)

1. A wireless heterogeneous control computing system based on a neural network, comprising:
the mobile intelligent terminal: an edge computing terminal that independently completes the required real-time acquisition and communication, performs complete neural network calculation, and supports multiple modes of wireless communication, and that provides the host control terminal with neural network calculation guarantee and with storage, synchronization, and updating of various data sources;
the host control terminal: it identifies, processes, and bidirectionally executes control instructions, obtains the data sent by the mobile intelligent terminal through the network and participates in the co-calculation of the peripheral neural network, and an asymmetric operator can be enabled on the host control terminal to ensure that the mobile intelligent terminal cannot bypass or copy the peripheral neural network calculation in which the host control terminal participates and drive the controlled device; the host control terminal refers to: the lower-level machine actually in use; the host control terminal: performs communication key verification, peripheral neural network matching, and control of function triggering; it participates in a calculation link of the neural network, and the final election link of the neural calculation is completed by the host control terminal;
the elastic co-computation cluster: it mainly provides data-source expansion and instruction-updating services for the host control terminal, and also provides elastic neural network calculation for the mobile intelligent terminal, where the expansion mode of the elastic neural network calculation is transparent to the host control terminal and is completed only by the mobile intelligent terminal; the elastic co-computation cluster: when the host control terminal needs calculation and data storage, it is dynamically loaded and allocated through the mobile intelligent terminal, serving as the back-end data warehouse and algorithm support of the host control terminal and cooperating with the mobile intelligent terminal in calculation; in extreme case 1, the mobile intelligent terminal reports the transparently transmitted data and the calculation task to the elastic co-computation cluster, and everything except the sensor acquisition task is completed by the elastic co-computation cluster, including the redundant dual calculation guarantee of the peripheral neural network; in extreme case 2, the elastic co-computation cluster network is unreachable or is not used at all, and the mobile intelligent terminal idles and waits after completing all the neural network calculation requirements of the host control terminal; the elastic co-computation cluster is an elastically expanded physical medium;
the mobile intelligent terminal, the host control terminal, and the elastic co-computation cluster do not carry out the same kind of calculation tasks in parallel.
2. The wireless heterogeneous control computing system based on the neural network as claimed in claim 1, wherein a wireless heterogeneous input and output module WNIO and a neural network negotiation and neural network computing module NNNC are arranged on the mobile intelligent terminal;
the wireless heterogeneous input and output module WNIO: determines, according to the type of the acquired image, whether the uplinked elastic co-computation cluster needs to be used; to improve calculation and response efficiency, when possible the calculation task is completed directly on the local terminal without obtaining a network model or calculation assistance through the uplink;
the neural network negotiation and neural network calculation module NNNC: collects and stores basic data and performs normalized preprocessing on the data; then, according to the algorithm selected for the application service, the host control terminal most suitable for the current service is loaded, together with the neural network, into the mobile intelligent terminal for cooperation; then, the multi-layer calculation division of the neural network is performed.
3. The neural network-based wireless heterogeneous control computing system of claim 1, wherein the elastic co-computation cluster comprises: public clouds, private clouds, and distributed proprietary computing devices.
4. The neural network-based wireless heterogeneous control computing system of claim 2, wherein the basic data comprises: photos, pictures, voice, fingerprints, irises, special expressions, and facial features;
the normalized preprocessing refers to: normalization over the value range 0-255 and vector matrixing;
the algorithms of the application service are: neural network classification and feature extraction algorithms;
the sources of the loading include: the local mobile intelligent terminal and the elastic co-computation cluster.
5. The neural network-based wireless heterogeneous control computing system according to claim 2, wherein the decision of whether the uplinked elastic co-computation cluster needs to be used is made according to the type of the acquired image:
type-A images, used for small feature recognition, are not uplinked; type A comprises: human face, voice, waveform, gesture, and fingerprint, with a size not exceeding 512x512;
a type-A image supplemented with externally identified data of the same type is upgraded to a type-B image;
type-C images include: images with features to be recognized whose size exceeds 512x512, images whose target object is large or of preset volume, and public data with privacy or confidentiality requirements;
type-B and type-C images are uplinked.
6. The wireless heterogeneous control computing system based on the neural network according to claim 1, wherein the elastic co-computation cluster is transparent to the host control terminal, invisible to it, and not directly connected to it; updating and iteration also take place on the elastic co-computation cluster, while the computing power and peripheral neural calculation form on the host control terminal can be ultra-low-power pulse-driven and its hardware instruction set remains unchanged for a long time;
the host control terminal and the mobile intelligent terminal are the main bodies of the calculation relationship; the elastic co-computation cluster does not directly participate in the calculation relationship, belongs to the expansion of the mobile intelligent terminal, and does not directly participate in the business or the algorithm.
CN202010598370.5A 2020-06-28 2020-06-28 Wireless heterogeneous control computing system based on neural network Active CN111818139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010598370.5A CN111818139B (en) 2020-06-28 2020-06-28 Wireless heterogeneous control computing system based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010598370.5A CN111818139B (en) 2020-06-28 2020-06-28 Wireless heterogeneous control computing system based on neural network

Publications (2)

Publication Number Publication Date
CN111818139A CN111818139A (en) 2020-10-23
CN111818139B true CN111818139B (en) 2021-05-21

Family

ID=72855636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010598370.5A Active CN111818139B (en) 2020-06-28 2020-06-28 Wireless heterogeneous control computing system based on neural network

Country Status (1)

Country Link
CN (1) CN111818139B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381211B (en) * 2020-11-20 2023-04-28 西安电子科技大学 System and method for executing deep neural network based on heterogeneous platform
CN112381462B (en) * 2020-12-07 2024-07-16 军事科学院***工程研究院网络信息研究所 Data processing method of intelligent network system similar to human nervous system
CN114666812A (en) * 2020-12-24 2022-06-24 华为技术有限公司 Information processing method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101345619A (en) * 2008-08-01 2009-01-14 清华大学深圳研究生院 Electronic data protection method and device based on biological characteristic and mobile cryptographic key
CN109165523A (en) * 2018-07-27 2019-01-08 深圳市商汤科技有限公司 Identity identifying method and system, terminal device, server and storage medium
CN110223420A (en) * 2019-04-29 2019-09-10 广东技术师范学院天河学院 A kind of fingerprint unlocking system
CN110738778A (en) * 2019-09-27 2020-01-31 北京小米移动软件有限公司 control forbidding method, device, equipment and storage medium
WO2020113187A1 (en) * 2018-11-30 2020-06-04 Sanjay Rao Motion and object predictability system for autonomous vehicles

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255605B (en) * 2017-12-29 2020-12-04 北京邮电大学 Image recognition cooperative computing method and system based on neural network
CN110994798A (en) * 2019-12-16 2020-04-10 深圳供电局有限公司 Substation equipment monitoring system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101345619A (en) * 2008-08-01 2009-01-14 清华大学深圳研究生院 Electronic data protection method and device based on biological characteristic and mobile cryptographic key
CN109165523A (en) * 2018-07-27 2019-01-08 深圳市商汤科技有限公司 Identity identifying method and system, terminal device, server and storage medium
WO2020113187A1 (en) * 2018-11-30 2020-06-04 Sanjay Rao Motion and object predictability system for autonomous vehicles
CN110223420A (en) * 2019-04-29 2019-09-10 广东技术师范学院天河学院 A kind of fingerprint unlocking system
CN110738778A (en) * 2019-09-27 2020-01-31 北京小米移动软件有限公司 control forbidding method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Face Recognition Platform Based on Collaborative Edge Computing; 莫砚汉 (Mo Yanhan); China Master's Theses Full-text Database (Information Science & Technology); 2020-02-15; main text pp. 23-26 and 39-46 *

Also Published As

Publication number Publication date
CN111818139A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
CN111818139B (en) Wireless heterogeneous control computing system based on neural network
US11138903B2 (en) Method, apparatus, device and system for sign language translation
JP7412847B2 (en) Image processing method, image processing device, server, and computer program
CN106453228B (en) User login method and system for intelligent robot
CN112989767B (en) Medical term labeling method, medical term mapping device and medical term mapping equipment
CN112785278A (en) 5G intelligent mobile ward-round method and system based on edge cloud cooperation
CN110033764A (en) Sound control method, device, system and the readable storage medium storing program for executing of unmanned plane
CN113191479A (en) Method, system, node and storage medium for joint learning
KR20200145078A (en) Artificial intelligence platform and method for providing the same
US20230028830A1 (en) Robot response method, apparatus, device and storage medium
CN115100563A (en) Production process interaction and monitoring intelligent scene based on video analysis
CN114610677A (en) Method for determining conversion model and related device
CN112037305B (en) Method, device and storage medium for reconstructing tree-like organization in image
CN113516167A (en) Biological feature recognition method and device
CN112989922A (en) Face recognition method, device, equipment and storage medium based on artificial intelligence
CN105225035A (en) A kind ofly realize the unified robot of E-Government
CN106997449A (en) Robot and face identification method with face identification functions
CN115016911A (en) Task arrangement method, device, equipment and medium for large-scale federal learning
CN113673476A (en) Face recognition model training method and device, storage medium and electronic equipment
CN114237861A (en) Data processing method and equipment thereof
Tsarov et al. Extended classification model of telemedicine station
CN114528893A (en) Machine learning model training method, electronic device and storage medium
CN205140005U (en) E -Government integrated system
CN116050548B (en) Federal learning method and device and electronic equipment
CN117390455B (en) Data processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant