CN111079698A - Method and device for recognizing tangram toy - Google Patents

Method and device for recognizing tangram toy

Info

Publication number
CN111079698A
CN111079698A (application CN201911391896.XA)
Authority
CN
China
Prior art keywords
tangram
recognition
image
model
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911391896.XA
Other languages
Chinese (zh)
Inventor
卓迎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xiaoma Zhiqu Technology Co Ltd
Original Assignee
Hangzhou Xiaoma Zhiqu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xiaoma Zhiqu Technology Co Ltd filed Critical Hangzhou Xiaoma Zhiqu Technology Co Ltd
Priority to CN201911391896.XA
Publication of CN111079698A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a method and apparatus for recognizing a tangram toy. The method comprises the following steps: cropping the image region of the camera frame that contains the effective tangram; taking the cropped image as the input to a tangram recognition model, the tangram recognition model being a deep convolutional neural network trained for tangram recognition, the output of the tangram recognition model indicating the pixels in the image of all tangram pieces contained in the input image and the tangram type to which each pixel corresponds; and identifying the pixel-level region of each tangram piece contained in the image based on the output of the tangram recognition model. Applying the method effectively improves the accuracy of tangram recognition while yielding good recognition results under different illumination conditions and different background environments.

Description

Method and device for recognizing tangram toy
Technical Field
The present disclosure relates to the field of image recognition for tangram toys, and more particularly to a tangram recognition method and apparatus based on a convolutional neural network.
Background
The tangram is of great help in developing children's logical thinking, spatial imagination, and related abilities, but children usually need a professional training institution or professional guidance from parents to gradually appreciate its techniques and fun. Most children who encounter the tangram only play with it casually, never get past the entry level, and give up once they run into difficulties.
To address this, a device with a camera and a computing unit can capture video of a child playing with the tangram toy in real time, recognize and analyze the child's operations on the tangram from the video, and, when the child encounters a problem it cannot solve, provide corresponding prompts and guidance through a screen or by voice. This helps the child complete the task of assembling the tangram into a specified pattern, gradually explore tangram techniques, and develop the corresponding abilities.
The core of the above solution is an image recognition algorithm for the tangram. Conventional computer vision algorithms can implement this function (for example, by converting the image to an HSV color representation and recognizing the different tangram pieces by filtering on their seven colors), but their robustness and compatibility are poor, and the effect and experience in practical applications are unsatisfactory. For example, recognition quality drops sharply under illumination conditions such as strong light or shadow, and it is heavily affected by different desktop backgrounds and different camera models.
Disclosure of Invention
In view of the above, the present disclosure provides a tangram recognition method with high robustness and high recognition accuracy, along with a corresponding apparatus.
According to an aspect of the present disclosure, there is provided a tangram recognition method based on a convolutional neural network, the method including: cropping the middle region of an image in a real-time video; and taking the cropped image as the input to a tangram recognition model, the tangram recognition model being a deep convolutional neural network trained to recognize the tangram toy, the output of which indicates the pixels in the image of all tangram pieces contained in the input image and the tangram type to which each pixel corresponds.
According to another aspect of the present disclosure, a scheme is provided in which a convolution layer and a dilated (hole) convolution layer replace the fully connected layer of a classification network, and a corresponding decoder structure based on deconvolution layers is added after the final feature extraction layer, so that the network model can output a pixel-level recognition result.
According to another aspect of the present disclosure, a dilated-convolution structure is provided to reduce the computation and parameter count of the recognition model and to enlarge the receptive field while maintaining the feature-map resolution, so that the recognition model can accurately recognize tangrams of different scales; that is, the model recognizes the tangram accurately whether it lies close to the camera or far from it.
According to another aspect of the present disclosure, a scheme is provided that upsamples multi-scale feature maps and then fuses them, increasing the recognition accuracy for small-target tangrams.
According to another aspect of the present disclosure, a scheme is provided that reduces the resolution of the input image in the initialization part of the recognition model, so as to remove visual redundancy, reduce computation, and increase the recognition speed.
According to another aspect of the present disclosure, a method is provided that reduces the size of the decoder, so as to shrink the network model, reducing computation and increasing the recognition speed.
According to another aspect of the present disclosure, a method is provided that factorizes the standard convolution operation, so as to reduce the computation and parameters of the network model, thereby accelerating recognition.
According to another aspect of the present disclosure, there is provided a tangram recognition apparatus based on a convolutional neural network, including: a processor; a memory for storing processor-executable instructions, the processor being configured to perform the above method; and a camera for capturing video of the toy.
According to aspects of the present disclosure, the image pixels of every tangram piece contained in the input image, and the tangram type each pixel belongs to, are identified by a deep convolutional network dedicated to tangram recognition, and a Conditional Random Field (CRF) is employed as post-processing of the model, improving the accuracy with which the model identifies the edges of tangram pieces.
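As an illustration of the CRF post-processing, the following sketch uses the third-party pydensecrf library; the library choice and the kernel parameters are assumptions, since the disclosure states only that a CRF refines the model output.

```python
# A minimal sketch of dense-CRF refinement of the network's softmax output.
# pydensecrf and all parameter values are assumptions, not part of the disclosure.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=5):
    """image: HxWx3 uint8 RGB frame; probs: (n_classes, H, W) float32 softmax output."""
    n_classes, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(probs))   # negative log-probabilities as unary terms
    d.addPairwiseGaussian(sxy=3, compat=3)        # smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,       # appearance kernel: nearby, similar-color
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)
    return np.argmax(q, axis=0).reshape(h, w)     # refined per-pixel tangram labels
```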
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the disclosure together with the description, and serve to explain the principles of the present disclosure.
Figure 1 illustrates a method of tangram recognition based on a convolutional neural network according to one embodiment of the present disclosure.
FIGS. 2-7 show schematic diagrams of one embodiment according to the present disclosure.
Fig. 8 shows a block diagram of the structure of a tangram recognition device based on a convolutional neural network according to one embodiment of the present disclosure.
Fig. 9 shows a block diagram of the structure of a tangram recognition device based on a convolutional neural network according to one embodiment of the present disclosure.
Detailed Description
various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Figure 1 illustrates a method of tangram recognition based on a convolutional neural network according to one embodiment of the present disclosure. The method can be applied to terminals such as smart phones, tablets, smart robots, smart televisions, projectors, computers, and the like. As shown in fig. 1, the method includes the following steps 102, 104, and 106.
Step 102: crop the image in a first region of the camera's real-time video.
The first region may be set according to the actual situation. For example, when a reflector is added to the front camera of a tablet or mobile phone, the lower nine-tenths of the frame can be kept, removing the part of the image that reflects the screen itself.
In one possible implementation, the cropped image may be further processed, for example resized and pixel-value normalized, so that the processed image matches the input requirements of the convolutional neural network described below, further improving recognition accuracy.
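For example, the cropping, resizing and normalization might look like the following sketch; OpenCV, the crop fraction, and the division-by-255 normalization scheme are assumptions, while the 576 × 576 size comes from the description below.

```python
# A minimal preprocessing sketch. keep_bottom and the [0, 1] scaling are assumptions.
import cv2
import numpy as np

def preprocess(frame, keep_bottom=0.9, size=576):
    """Crop the region likely to contain the tangram, resize, and normalize."""
    h = frame.shape[0]
    cropped = frame[int(h * (1.0 - keep_bottom)):, :]  # keep the lower 9/10 of the frame
    resized = cv2.resize(cropped, (size, size))        # match the model's 576x576x3 input
    return resized.astype(np.float32) / 255.0          # scale pixel values to [0, 1]
```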
Step 104: take the cropped image as the input to the tangram recognition model, the tangram recognition model being a deep convolutional neural network trained for tangram recognition, the output of which indicates the pixels in the image of all tangram pieces contained in the input image and the tangram type to which each pixel corresponds.
Step 106: identify the pixel-level region of each tangram piece contained in the image based on the output of the tangram recognition model.
Referring to fig. 2, a schematic view of a tangram recognition model scenario according to an exemplary embodiment of the present disclosure is shown. The method trains the tangram recognition model on the server side and sends the trained model to the terminal, so that the terminal can run the tangram recognition model directly and recognize, in real time, the pixels of each tangram piece in the image and their corresponding types.
Referring to fig. 3, on the server side, images containing all seven tangram pieces are collected (step 302), the edges of each piece are framed with polygons and labeled with the corresponding type name (step 304), and data augmentation is applied to the collected tangram image samples and labels (step 306); the samples are then scaled to the same size, e.g. 576 × 576 × 3 (step 308), and pixel values are normalized (step 310) to obtain a training sample library. Model design and training can then be carried out on this library (step 312), yielding the above-mentioned convolutional neural network dedicated to tangram recognition and generating a corresponding network model file capable of recognizing the tangram (step 314); this file may include information such as the network's convolution kernels and hierarchical structure. The tangram recognition model file may then be sent to a terminal.
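As an illustration of steps 304 and 306, the polygon annotations might be rasterized into per-pixel training masks as in the following sketch; the annotation format and the class ids (0 for background, 1 to 7 for the pieces) are assumptions.

```python
# A hedged sketch of turning polygon labels into per-pixel segmentation masks.
import cv2
import numpy as np

def polygons_to_mask(polygons, height=576, width=576):
    """polygons: list of (points, class_id); points is an Nx2 array of pixel coordinates."""
    mask = np.zeros((height, width), dtype=np.uint8)   # 0 = background
    for points, class_id in polygons:
        # Fill each labeled polygon with its tangram class id (assumed 1..7)
        cv2.fillPoly(mask, [np.asarray(points, dtype=np.int32)], int(class_id))
    return mask
```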
Referring to fig. 4, on the terminal side, an image of the tangram observed in real time by the camera can be acquired automatically (step 402); the image portion that cannot contain the tangram is cropped away (step 404), the remainder is scaled to the same size as the training samples, e.g. 576 × 576 × 3 (step 406), and the pixel values are normalized (step 408) to match the input requirements of the tangram recognition model. Before image detection, the tangram recognition model file is loaded to build the neural network model (step 410). The normalized image (which can be understood as the data corresponding to the image) is then used as the input of the tangram recognition model to perform tangram recognition (step 412), obtaining the image pixels of each tangram piece contained in the image and the tangram type of each pixel (step 414).
It should be noted that the above is only an exemplary flow and is not intended to limit the order of the steps; for example, step 410 may be performed at any time before step 412.
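Putting the terminal-side steps together, the flow of fig. 4 might be sketched as follows; the TorchScript model-file format and the file name are assumptions, and preprocess is the sketch given above.

```python
# A hedged end-to-end inference sketch (PyTorch/TorchScript assumed).
import torch

model = torch.jit.load("tangram_model.pt").eval()  # step 410: load the model file once

def recognize(frame):
    # HWC float image -> 1x3xHxW tensor expected by the network
    x = torch.from_numpy(preprocess(frame)).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)                          # step 412: 1x7xHxW predictions
    return logits.argmax(dim=1).squeeze(0)         # step 414: per-pixel tangram type
```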
Figs. 5 and 6 give exemplary diagrams of the tangram recognition network model in this application scenario, wherein:
a Conv layer (convolution layer) learns the features of the tangram image hierarchically: the lower convolution layers learn low-level features of the tangram image (e.g., edges and lines), the middle layers learn combinations of the low-level features, and the higher layers learn more abstract features of the tangram image;
a BN layer (batch normalization layer) prevents vanishing gradients during training and accelerates training; one is placed after every convolution layer of the network model;
ReLU denotes an activation layer (rectified linear unit), which removes irrelevant noise and accelerates the extraction of the key factors;
DeConv denotes a deconvolution layer, which enlarges the feature map, i.e., upsamples it;
Dilated Conv denotes a dilated (hole) convolution layer, which enlarges the field of view of the convolution kernel's center pixel (the area visible to one convolution kernel).
Conv_blockA consists of a Conv convolution layer, a BN batch normalization layer, and a ReLU activation layer (step 502), and extracts feature information of the tangram image.
Conv_blockB consists of a dilated Conv (hole convolution) layer, a BN batch normalization layer, and a ReLU activation layer (step 504), and extracts feature information of the tangram image at different scales.
Conv_blockC consists of a DeConv deconvolution layer, a BN batch normalization layer, and a ReLU activation layer (step 506), and upsamples the feature map so that the final output of the network has the same resolution as the input image.
Step 508 inputs the 3-channel image scaled to 576 x 576 resolution into the tangram recognition model.
Step 510 performs a convolution on the input image with 16 kernels of size 3 × 3, using a stride of 2 (the 2 × 2 moving windows do not overlap), and outputs a 288 × 288 × 16 feature map. Downsampling the input image heavily in the model's initialization stage greatly reduces the network's subsequent computation and, with the model's recognition accuracy essentially unchanged, greatly increases the recognition speed of the tangram model. The reason is that the tangram image carries a large amount of redundant information; the convolution of this layer concentrates that redundancy and reduces the spatial size.
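A minimal sketch of this initialization layer with the stated numbers (3 × 3 kernels, 16 filters, stride 2); the padding of 1 is an assumption, and the shape arithmetic can be checked directly:

```python
# Stride-2 3x3 convolution with 16 filters: 576x576x3 -> 288x288x16.
import torch
import torch.nn as nn

stem = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1)
features = stem(torch.zeros(1, 3, 576, 576))
print(features.shape)  # torch.Size([1, 16, 288, 288])
```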
Steps 512, 514, and 516 constitute the encoder module of the network model and extract the feature information of the tangram image hierarchically.
In steps 514 and 516, dilated (hole) convolution layers replace ordinary convolution layers, enlarging the field of view of the kernel's center pixel while the feature-map resolution stays unchanged. In conventional image classification, the kernel's field of view is usually enlarged by adding a pooling (downsampling) layer after the convolution layer, but that shrinks the feature map; this matters little for classification, since that task outputs only a one-dimensional class label. For the task of segmenting the tangram image, however, pixel-level classification information of the same size as the input image must be output, and a shrunken feature map loses a large amount of information during upsampling. This step adopts dilated convolution layers to solve the problem, enlarging the receptive field of the convolution kernel while the feature-map resolution stays unchanged.
Fig. 6 shows an example of a dilated convolution layer of this model. The kernel size is 3 × 3, and the red dots represent kernel weights different from 0. With a dilation rate of 3 (two holes between adjacent weights), the 3 × 3 kernel expands to cover a 7 × 7 region, so the kernel's field of view grows to 7 × 7; the effective kernel size is k + (k − 1)(d − 1) = 3 + 2 × 2 = 7. The field of view of the kernel is thus enlarged while the resolution of the output feature map is unchanged.
Steps 518, 520, 522, 524, and 526 constitute the decoder module of the network model; the feature maps generated in steps 514 and 516 are deconvolved and upsampled to obtain a feature map whose resolution matches the input image.
The feature map generated in step 516 is smaller and has learned more abstract, semantically rich features, but because of its small size, the features of small tangram pieces are lost during upsampling (steps 518, 520, and 522), so recognition accuracy for small tangram images is not high enough.
To solve this, the scheme of fusing after multi-scale feature-map upsampling is adopted, increasing the recognition accuracy for small-target tangram images. The feature map generated in step 514 is larger and preserves the features of small tangram pieces more completely; steps 524 and 526 deconvolve and upsample this feature map.
Step 528 fuses the feature maps of steps 522 and 526, merging the upsampled results of the two feature-map sizes.
Step 530 applies a Conv convolution to the fused feature map of step 528 to generate a prediction of the same size as the input image, consisting of 7 channels, one channel per tangram piece type.
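As an illustration of steps 528 and 530, the fusion and prediction head might be sketched as follows; element-wise addition as the fusion operation is an assumption, and concatenation would fit the description equally well.

```python
# A hedged sketch of multi-scale fusion and the 7-channel prediction head.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse two upsampled feature maps and predict 7 per-piece channels."""
    def __init__(self, channels, n_classes=7):
        super().__init__()
        self.head = nn.Conv2d(channels, n_classes, kernel_size=1)

    def forward(self, up_deep, up_shallow):
        fused = up_deep + up_shallow   # step 528: element-wise fusion (an assumption)
        return self.head(fused)        # step 530: one output channel per tangram piece
```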
The tangram recognition model must run directly on the mobile terminal, whose computing resources are limited; real-time recognition therefore requires the network model to be as compact and efficient as possible. The following designs simplify and optimize the network model and accelerate model inference.
In a typical image segmentation model, the decoder structure exactly mirrors the encoder structure. That is functionally reasonable, but leaves room for optimization. The encoder extracts the feature information of the tangram image and can be made relatively large; the decoder merely upsamples the features extracted by the encoder and, in essence, fine-tunes the details of the result, so its size can be reduced. This lowers the computation of the whole model and accelerates recognition inference while leaving the final recognition result essentially unaffected, an inference confirmed by the model's experimental results. Step 512 contains an encoder of 6 Conv_blockA, while the corresponding steps 522 and 526 each contain a decoder of 4 Conv_blockC, 2 blocks fewer than the encoder; step 514 contains an encoder of 6 Conv_blockA, while the corresponding steps 520 and 524 each contain a decoder of 4 Conv_blockC, 2 blocks fewer; and step 516 contains an encoder of 6 Conv_blockA, while the corresponding step 518 contains a decoder of 4 Conv_blockC, 2 blocks fewer.
the tangram recognition model based on the convolutional neural network needs to detect the tangram with a small target, so the design of the network model is deep, a large number of convolution operations exist, and the method optimizes the standard convolution operation, thereby achieving the effects of model compression and acceleration. The standard convolution operation is to operate all channels and image areas at the same time, the method carries out factorization on a standard convolution kernel, the standard convolution kernel is divided into two sub-convolution operations, each channel is firstly carried out with respective convolution operation to obtain a new channel feature image, and then the standard 1X1 cross-channel convolution operation is carried out. For example, if the convolution kernel of a convolution layer in the model is Kw high, Kh wide, M deep, and the number of Feature maps (Feature maps) is N, then the convolution operation can be decomposed into 1 × N × M convolution operations and Kw × Kh M convolution operations. Assuming that the input width and height of the convolutional layer are Dw and Dh, respectively, the calculated quantity of a convolution operation before factorization optimization is Kw × Kh × M × nw Dh and the parameter quantity is Kw × Kh × M. Factorization is performed using this implementation, the calculated quantity of one convolution layer is Kw × Kh × M × Dw × Dh + M × N × Dw × Dh, and the parameter quantity of one convolution layer is Kw × Kh × M + M × N. The convolution calculation and the parameter quantity after the factorization optimization are obviously reduced, and the calculation quantity and the parameter quantity are reduced by about 10 times. Compared with the accuracy rate and the recall rate of the model before and after optimization, the difference is not more than 0.5 percent.
Fig. 8 shows a block diagram of a recognition device based on the tangram recognition model according to an embodiment of the present disclosure. As shown in the figure, the device comprises:
an image acquisition and cropping unit 702 for acquiring the camera's real-time image and cropping the effective image region; a tangram recognition unit 704 for taking the cropped image as the input to the tangram recognition model, the tangram recognition model being a deep convolutional neural network trained for tangram recognition, the output of which indicates the pixels of all tangram pieces contained in the input image and the tangram type corresponding to each pixel; and a tangram recognition post-processing unit 706 for optimizing the recognition result output by the network model, increasing the precision of tangram edge recognition and keeping the best recognition result.
For other details of the above device, reference may be made to the description of the method above, and further description is omitted here.
Fig. 9 is a block diagram of a tangram recognition apparatus 800 according to an exemplary embodiment. For example, the apparatus 800 may be a tablet, a smart phone, a smart television, a smart robot, or another smart terminal.
Referring to fig. 9, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, a communication component 816, and a camera component 818.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or part of the steps of the method described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and the other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can be personalized with state information of the computer-readable program instructions and can execute the instructions, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. An identification method based on a tangram network model, the method comprising:
cropping the image region of a camera frame that contains the effective tangram;
taking the cropped image as the input to a tangram recognition model, the tangram recognition model being a deep convolutional neural network trained for tangram recognition, the output of the tangram recognition model indicating the pixels in the image of all tangram pieces contained in the input image and the tangram type to which each pixel corresponds;
identifying all of the tangram pieces contained in the cropped image based on the output of the tangram recognition model.
2. The method of claim 1, wherein:
in the tangram recognition model, a convolution layer and a dilated (hole) convolution layer replace the fully connected layer of the classification network, and a corresponding decoder structure based mainly on deconvolution layers is added after the final feature extraction layer, so that the network model can output a pixel-level recognition result.
3. The method of claim 1, wherein:
in the tangram recognition model, a dilated-convolution structure is used to reduce the computation and parameter count of the recognition model and to enlarge the receptive field while maintaining the feature-map resolution, so that the recognition model can accurately recognize tangrams of different scales; that is, the model recognizes the tangram accurately whether it lies close to the camera or far from it.
4. The method of claim 1, wherein:
the scheme of adopting multi-scale feature mapping for up-sampling and then fusing is provided, and the accuracy of identifying the small target tangram is improved.
5. The method of claim 1, wherein:
the resolution of the input image is reduced in the initialization part of the tangram recognition model, so as to remove visual redundancy, reduce computation, and increase the recognition speed.
6. The method of claim 1, wherein:
the method for reducing the size of the seven-piece puzzle recognition model decoder is provided, so that the network model is simplified, and the effects of reducing the calculation amount and accelerating the recognition speed are achieved.
7. The method of claim 1, wherein:
a method for factorizing standard convolution operation is provided, so that the calculated amount and parameters of a network model are reduced, and the effect of accelerating the identification speed is achieved.
8. A tangram recognition device based on a tangram recognition model, the device comprising:
an image acquisition and cropping unit for acquiring a camera image and cropping the image region containing the effective tangram;
a tangram recognition unit for taking the cropped image as the input to a tangram recognition model, the tangram recognition model being a deep convolutional neural network trained for tangram recognition, the output of the tangram recognition model indicating the pixels of all tangram pieces contained in the input image and the tangram type to which each pixel corresponds.
9. A tangram recognition device based on tangram recognition model, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 7.
CN201911391896.XA 2019-12-30 2019-12-30 Method and device for recognizing tangram toy Pending CN111079698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911391896.XA CN111079698A (en) 2019-12-30 2019-12-30 Method and device for recognizing tangram toy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911391896.XA CN111079698A (en) 2019-12-30 2019-12-30 Method and device for recognizing tangram toy

Publications (1)

Publication Number Publication Date
CN111079698A true CN111079698A (en) 2020-04-28

Family

ID=70319398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911391896.XA Pending CN111079698A (en) 2019-12-30 2019-12-30 Method and device for recognizing tangram toy

Country Status (1)

Country Link
CN (1) CN111079698A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629736A (en) * 2017-03-15 2018-10-09 三星电子株式会社 System and method for designing super-resolution depth convolutional neural networks
CN108986124A (en) * 2018-06-20 2018-12-11 天津大学 In conjunction with Analysis On Multi-scale Features convolutional neural networks retinal vascular images dividing method
CN109726709A (en) * 2017-10-31 2019-05-07 优酷网络技术(北京)有限公司 Icon-based programming method and apparatus based on convolutional neural networks


Similar Documents

Publication Publication Date Title
CN110348537B (en) Image processing method and device, electronic equipment and storage medium
CN110287874B (en) Target tracking method and device, electronic equipment and storage medium
CN109522910B (en) Key point detection method and device, electronic equipment and storage medium
CN111445493B (en) Image processing method and device, electronic equipment and storage medium
CN109829863B (en) Image processing method and device, electronic equipment and storage medium
CN110675409A (en) Image processing method and device, electronic equipment and storage medium
CN111753822A (en) Text recognition method and device, electronic equipment and storage medium
CN110889469A (en) Image processing method and device, electronic equipment and storage medium
US11443438B2 (en) Network module and distribution method and apparatus, electronic device, and storage medium
CN109145970B (en) Image-based question and answer processing method and device, electronic equipment and storage medium
CN110633700B (en) Video processing method and device, electronic equipment and storage medium
CN112465843A (en) Image segmentation method and device, electronic equipment and storage medium
CN111340048B (en) Image processing method and device, electronic equipment and storage medium
CN110532956B (en) Image processing method and device, electronic equipment and storage medium
CN109840917B (en) Image processing method and device and network training method and device
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
CN113139471A (en) Target detection method and device, electronic equipment and storage medium
CN111414963A (en) Image processing method, device, equipment and storage medium
CN111931781A (en) Image processing method and device, electronic equipment and storage medium
CN110633715B (en) Image processing method, network training method and device and electronic equipment
CN109903252B (en) Image processing method and device, electronic equipment and storage medium
CN113660531A (en) Video processing method and device, electronic equipment and storage medium
CN113313115A (en) License plate attribute identification method and device, electronic equipment and storage medium
CN109635926B (en) Attention feature acquisition method and device for neural network and storage medium
CN111488964A (en) Image processing method and device and neural network training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200428