WO2022262116A1 - Oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition - Google Patents

Oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition

Info

Publication number
WO2022262116A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
oral
module
electric toothbrush
server
Prior art date
Application number
PCT/CN2021/114479
Other languages
English (en)
French (fr)
Inventor
熊丹
Original Assignee
熊丹
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 熊丹
Publication of WO2022262116A1 publication Critical patent/WO2022262116A1/zh
Priority to US 18/497,714 (published as US20240065429A1)

Classifications

    • A - HUMAN NECESSITIES
    • A46 - BRUSHWARE
    • A46B - BRUSHES
    • A46B 15/00 - Other brushes; Brushes with additional arrangements
    • A46B 15/0002 - Arrangements for enhancing monitoring or controlling the brushing process
    • A46B 15/0004 - Arrangements for enhancing monitoring or controlling the brushing process with a controlling means
    • A46B 15/0006 - Arrangements for enhancing monitoring or controlling the brushing process with a controlling means with a controlling brush technique device, e.g. stroke movement measuring device
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 17/00 - Devices for cleaning, polishing, rinsing or drying teeth, teeth cavities or prostheses; Saliva removers; Dental appliances for receiving spittle
    • A61C 17/16 - Power-driven cleaning or polishing devices
    • A61C 17/22 - Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like
    • A61C 17/32 - Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like reciprocating or oscillating
    • A61C 17/34 - Power-driven cleaning or polishing devices with brushes, cushions, cups, or the like reciprocating or oscillating driven by electric motor
    • A - HUMAN NECESSITIES
    • A46 - BRUSHWARE
    • A46B - BRUSHES
    • A46B 15/00 - Other brushes; Brushes with additional arrangements
    • A46B 15/0002 - Arrangements for enhancing monitoring or controlling the brushing process
    • A46B 15/0004 - Arrangements for enhancing monitoring or controlling the brushing process with a controlling means
    • A46B 15/0008 - Arrangements for enhancing monitoring or controlling the brushing process with a controlling means with means for controlling duration, e.g. time of brushing
    • A - HUMAN NECESSITIES
    • A46 - BRUSHWARE
    • A46B - BRUSHES
    • A46B 15/00 - Other brushes; Brushes with additional arrangements
    • A46B 15/0002 - Arrangements for enhancing monitoring or controlling the brushing process
    • A46B 15/0038 - Arrangements for enhancing monitoring or controlling the brushing process with signalling means
    • A46B 15/004 - Arrangements for enhancing monitoring or controlling the brushing process with signalling means with an acoustic signalling means, e.g. noise
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 19/00 - Dental auxiliary appliances
    • A61C 19/04 - Measuring instruments specially adapted for dentistry
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A - HUMAN NECESSITIES
    • A46 - BRUSHWARE
    • A46B - BRUSHES
    • A46B 2200/00 - Brushes characterized by their functions, uses or applications
    • A46B 2200/10 - For human or animal care
    • A46B 2200/1066 - Toothbrush for cleaning the teeth or dentures
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present disclosure relates to the technical field of electric toothbrushes, in particular to an oral health management system for adjusting electric toothbrushes based on artificial intelligence image recognition.
  • Electric toothbrushes currently on the market cannot collect and analyze the oral health of users, nor can they provide targeted oral cleaning services for users according to the oral health of users, and cannot achieve better oral cleaning effects.
  • the present disclosure proposes an oral health management system for adjusting electric toothbrushes based on artificial intelligence image recognition.
  • an oral health management system that adjusts electric toothbrushes based on artificial intelligence image recognition is provided, including intelligent electric toothbrushes and servers,
  • smart electric toothbrushes include,
  • An image acquisition module configured to acquire oral cavity images
  • the image judging module is used to judge whether the collected oral image is a valid oral image; if the oral image is valid, it is sent by the first communication module to the server for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module;
  • the first communication module is used to send valid oral images to the server and to receive the recognition results or tooth cleaning parameters sent back by the server; the recognition results include at least the position information of the teeth, information on whether the teeth have caries and/or dental calculus, and severity grading information for the caries and/or calculus; the tooth cleaning parameters include brushing duration and/or vibration frequency;
  • the positioning module is used to obtain the position information of the tooth being cleaned and send it to the main control module;
  • the main control module is used to select, according to the position information of the tooth being cleaned, the corresponding tooth cleaning parameters, convert them into control signals and send them to the motor drive module in real time, and, when receiving from the image judging module the reason an image is invalid, to select the corresponding voice data in the voice database according to that reason;
  • the motor drive module is used to connect the motor and drive the motor to vibrate according to the control signal of the main control module;
  • the voice playback module is used to play voice according to the voice data selected by the main control module;
  • the server includes,
  • the recognition module is used to recognize the received oral image and generate a recognition result.
  • the recognition method includes obtaining the oral image, determining the position information of dental caries and/or dental calculus through a target detection algorithm, determining the severity grading information of the dental caries and/or dental calculus through a convolutional neural network, and generating the recognition result; and
  • the second communication module is used to receive the oral image sent by the smart electric toothbrush, and send the recognition result or tooth cleaning parameters to the smart electric toothbrush;
  • It also includes a parameter determination module, which is used to determine tooth cleaning parameters corresponding to different oral regions according to the recognition results, and the parameter determination module is set on the intelligent electric toothbrush or the server.
  • a control method for an intelligent electric toothbrush that adjusts an electric toothbrush based on artificial intelligence image recognition is provided, which is applied to any of the above-mentioned oral health management systems that adjust an electric toothbrush based on artificial intelligence image recognition, including the following steps,
  • Intelligent electric toothbrush collects oral images
  • the smart electric toothbrush judges whether the collected oral image is valid; if the oral image is valid, it uploads the valid oral image to the server, and if the oral image is invalid, it issues a corresponding voice prompt;
  • the server recognizes the received oral image and generates a recognition result, the recognition result at least includes tooth position information, information on whether the tooth has caries and/or calculus, and information on the severity classification of caries and/or calculus;
  • the server determines the tooth cleaning parameters corresponding to different oral regions according to the recognition results and sends them to the smart electric toothbrush, or the server sends the recognition results to the smart electric toothbrush, and the smart electric toothbrush determines the tooth cleaning parameters corresponding to different oral regions according to the recognition results, tooth cleaning parameters including brushing duration and/or vibration frequency;
  • the smart electric toothbrush obtains the location information of the teeth currently being cleaned
  • the smart electric toothbrush selects the corresponding tooth cleaning parameters according to the position information of the teeth currently being cleaned;
  • the smart electric toothbrush controls the motor vibration according to the current tooth cleaning parameters
  • the server recognizes the received oral image and generates a recognition result, which specifically includes: acquiring the oral image; determining the position information of dental caries and/or dental calculus through a target detection algorithm; determining the severity grading information of the dental caries and/or dental calculus through a convolutional neural network; and generating the recognition result.
  • a smart electric toothbrush is provided, which is applied to any of the above-mentioned oral health management systems based on artificial intelligence image recognition to adjust the electric toothbrush, including,
  • An image acquisition module configured to acquire oral cavity images
  • the image judging module is used to judge whether the collected oral image is a valid oral image; if the oral image is valid, it is sent by the first communication module to the server for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module;
  • the first communication module is used to send valid oral images to the server and to receive the recognition results or tooth cleaning parameters sent back by the server; the recognition results include at least the position information of the teeth, information on whether the teeth have caries and/or dental calculus, and severity grading information for the caries and/or calculus; the tooth cleaning parameters include brushing duration and/or vibration frequency;
  • the positioning module is used to obtain the position information of the tooth being cleaned and send it to the main control module;
  • the main control module is used to receive the reason an image is invalid and select the corresponding voice data in the voice database according to that reason, and to select, according to the position information of the tooth being cleaned, the corresponding tooth cleaning parameters, convert them into control signals and send them to the motor drive module;
  • the motor drive module is used to connect the motor and drive the motor to vibrate according to the control signal of the main control module;
  • the voice playing module is used for playing voice according to the voice data selected by the main control module.
  • the beneficial effects of the present disclosure are as follows: by acquiring the user's oral images and recognizing and analyzing them with a target detection algorithm and a convolutional neural network, accurate oral health information about the user, such as dental caries and dental calculus, is obtained, and the smart electric toothbrush is controlled accordingly, providing the user with targeted oral cleaning services, improving the cleaning effect of the smart electric toothbrush and improving the user experience; the image judging module's prior check of the validity of the oral image avoids the adverse effects on the recognition results caused by unclear oral images, overexposed or too dark pictures and unsuitable imaging angles, which improves the accuracy of the recognition results; and, based on the image judging module's analysis of invalid pictures, the voice playback module is controlled to play the relevant voice data, prompting the user by voice so that the user receives more intuitive guidance for capturing valid oral images, which further improves the accuracy of the recognition results on the one hand and enhances the user experience on the other.
  • Fig. 1 is a schematic structural diagram of an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition provided by an embodiment of the present disclosure.
  • Fig. 2 is a flowchart of a control method for an intelligent electric toothbrush provided by another embodiment of the present disclosure.
  • Fig. 3 is a flow chart of step S13 of the method for controlling an intelligent electric toothbrush provided by an embodiment of the present disclosure.
  • Fig. 4 is a flow chart of step S132 of the control method for an intelligent electric toothbrush provided by an embodiment of the present disclosure.
  • Fig. 5 is a flow chart of step S133 in the control method for an intelligent electric toothbrush provided by an embodiment of the present disclosure.
  • Fig. 6 is a schematic structural diagram of an intelligent electric toothbrush according to another embodiment of the present disclosure.
  • FIG. 1 of the specification shows an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition provided by an embodiment of the present application.
  • the system includes a smart electric toothbrush 1 and a server 2.
  • the smart electric toothbrush 1 includes:
  • An image acquisition module 101 configured to acquire oral images
  • the image judging module 102 is used to judge whether the collected oral image is a valid oral image; if the oral image is valid, the first communication module 103 sends it to the server 2 for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module 105;
  • the first communication module 103 is used to send valid oral images to the server 2 and to receive the recognition results sent back by the server 2; the recognition results include the position information of the teeth, information on whether the teeth have caries and calculus, and severity grading information for the caries and calculus;
  • the positioning module 104 is used to obtain the position information of the tooth being cleaned and send it to the main control module 105;
  • the main control module 105 is used to select the corresponding tooth cleaning parameters according to the position information of the teeth being cleaned, convert them into control signals and send them to the motor drive module 106 in real time, and when receiving the reason why the image sent by the image judging module 102 is invalid, Select the corresponding voice data in the voice database according to the invalid reason of the image;
  • the motor drive module 106 is used to connect the motor, and drive the motor to vibrate according to the control signal of the main control module 105;
  • the voice playing module 107 is used for playing voice according to the voice data selected by the main control module 105;
  • the parameter determination module 108 is used to determine tooth cleaning parameters corresponding to different oral cavity areas according to the recognition results sent back by the server 2, and the tooth cleaning parameters include brushing time and vibration frequency;
  • Server 2 includes,
  • the identification module 201 is used to identify the received oral image and generate an identification result.
  • the recognition method includes acquiring the oral image, determining the position information of dental caries and dental calculus through a target detection algorithm, determining the severity grading information of the dental caries and dental calculus through a convolutional neural network, and generating the recognition result;
  • the second communication module 202 is used to receive the effective oral image sent by the smart electric toothbrush, and send the recognition result to the smart electric toothbrush.
  • the beneficial effects of the present disclosure are as follows: by acquiring the user's oral images and recognizing and analyzing them with a target detection algorithm and a convolutional neural network, accurate oral health information about the user, such as dental caries and dental calculus, is obtained, and the smart electric toothbrush is controlled accordingly, providing the user with targeted oral cleaning services, improving the cleaning effect of the smart electric toothbrush and improving the user experience; the image judging module's prior check of the validity of the oral image avoids the adverse effects on the recognition results caused by unclear oral images, overexposed or too dark pictures and unsuitable imaging angles, which improves the accuracy of the recognition results; and, based on the image judging module's analysis of invalid pictures, the voice playback module is controlled to play the relevant voice data, prompting the user by voice so that the user receives more intuitive guidance for capturing valid oral images, which further improves the accuracy of the recognition results on the one hand and enhances the user experience on the other.
  • the image acquisition module 101 may be a miniature wide-angle camera, which is arranged on the brush head of the intelligent electric toothbrush.
  • a relatively complete oral image can be obtained without professional dental imaging equipment.
  • the lens of the image acquisition module 101 may be made of crystal glass. Because crystal glass has excellent anti-fog performance, this prevents the water mist present in the human oral cavity from blurring the acquired oral image, further improving the accuracy of the recognition results.
  • a supplementary light module may be arranged around the image acquisition module 101 to ensure sufficient light during shooting, which improves the clarity of the oral image, prevents the oral image from being too dark, ensures good image acquisition and further improves the accuracy of the recognition results.
  • the image judging module 102 may determine whether the oral image is valid by judging whether the oral image is clear, whether it is overexposed or too dark, and whether its imaging angle is appropriate. Specifically, parameter thresholds for the sharpness, exposure and imaging angle of the oral picture are preset, and each validity condition is checked in turn against these thresholds. If all three conditions (clear, moderate exposure, suitable imaging angle) are met, the oral image is judged to be valid and is sent to the server for recognition; if the oral image fails at least one of the three conditions, it is judged to be invalid and the image judging module 102 sends the reason for invalidity to the main control module 105. When there is more than one reason for invalidity, the image judging module 102 sends all of the reasons to the main control module 105.
  • the picture acquired by the image acquisition module 101 is a picture in JPG format.
  • in the image judging module 102, whether the oral image is clear can be judged by whether its resolution reaches 720 x 720 pixels; whether the oral image is overexposed or too dark can be judged by whether its brightness meets 200 cd/m²; and whether the imaging angle is appropriate can be judged by whether the angle is greater than 50° and at most 90°, the reference for the imaging angle being the occlusal surface of the teeth. By setting these specification parameters for the oral picture, the smart electric toothbrush can judge the validity of the oral image, providing a basis for accurate and effective recognition of subsequent oral images.
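As a rough illustration of the validity check described above, the following Python sketch applies the thresholds given in the text (720 x 720 resolution, the 200 cd/m² brightness criterion, and an imaging angle between 50° and 90°). It is not the patent's implementation: the mean-luminance band used as a stand-in for the brightness criterion, the function names, and the way the imaging angle is supplied are all illustrative assumptions.

```python
# Minimal sketch of the oral-image validity check (illustrative, not the patent's code).
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class ValidityResult:
    valid: bool
    reasons: List[str] = field(default_factory=list)  # all failure reasons, as described in the text

def check_oral_image(image: np.ndarray, imaging_angle_deg: float) -> ValidityResult:
    reasons = []
    h, w = image.shape[:2]
    if h < 720 or w < 720:                     # clarity proxy: resolution threshold from the text
        reasons.append("image_not_clear")
    luminance = float(image.mean())            # 0-255 mean luminance as a proxy for scene brightness
    if not (60 <= luminance <= 200):           # assumed band for "not overexposed or too dark"
        reasons.append("over_or_under_exposed")
    if not (50 < imaging_angle_deg <= 90):     # angle measured against the occlusal surface
        reasons.append("bad_imaging_angle")
    return ValidityResult(valid=not reasons, reasons=reasons)
```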
  • when the reason for invalidity is that the oral image is not clear, the voice data selected by the main control module 105 in the voice database may include a prompt to adjust the shooting duration or a prompt to refocus; when the reason is that the oral image is overexposed or too dark, the selected voice data may include a prompt to adjust how wide the user opens the mouth and a prompt to turn the supplementary light module on or off; when the reason is that the imaging angle is unsuitable, the selected voice data may include a prompt to adjust the user's photographing posture, such as "please extend the toothbrush head further in".
  • in this way, each reason for an invalid oral image corresponds to specific adjustment-prompt voice data, so the user receives targeted reminders that help them capture a valid oral image. On the one hand this improves the quality of the oral image and further improves recognition accuracy; on the other hand, prompting the user by voice is more direct and effective and improves the user experience.
  • after receiving the reason for invalidity from the image judging module 102, the main control module 105 can also, depending on the type of reason, directly switch the supplementary light module on or off or directly control the image acquisition module 101 to refocus. This reduces the difficulty of operation for the user, makes the toothbrush more convenient to use and improves the user experience.
  • while the main control module 105 is directly switching the supplementary light module or refocusing the image acquisition module 101, a corresponding waiting prompt can be selected from the voice database, such as "autofocusing, please hold this posture"; once the supplementary light switching or refocusing is finished, a corresponding photographing prompt can be selected, such as "refocused, please take the picture".
  • the positioning module 104 may include a gyroscope, which is used to obtain the position, movement trajectory and acceleration parameters of the smart electric toothbrush. The position of the tooth being cleaned is then determined from the attitude parameters of the smart electric toothbrush during use and, according to the caries or calculus recognition result for that tooth sent back by the server 2, the corresponding vibration mode is selected; the motor drive module is controlled to drive the motor at the frequency corresponding to that mode, providing a cleaning service matched to the condition of each tooth and improving the user experience.
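The sketch below illustrates, under stated assumptions, how gyroscope attitude readings might be mapped to a coarse tooth region and how a vibration mode could then be chosen from the server's recognition result for that region. The region boundaries, the mode frequencies and the severity-to-mode mapping are invented for illustration and are not values from the patent.

```python
# Illustrative only: attitude -> tooth region -> vibration mode selection.
from typing import Dict

def tooth_region_from_attitude(roll_deg: float, pitch_deg: float) -> str:
    side = "left" if roll_deg < 0 else "right"
    jaw = "upper" if pitch_deg > 0 else "lower"
    return f"{jaw}_{side}"                       # e.g. "upper_left"

MODE_FREQ_HZ = {1: 260, 2: 230, 3: 200}          # mode 1 > mode 2 > mode 3, per the text (values assumed)

def select_vibration_mode(region: str, recognition: Dict[str, int]) -> int:
    # recognition maps region -> severity grade (0 mild, 1 moderate, 2 severe), as output by the CNN
    severity = recognition.get(region, 0)
    return {0: 3, 1: 2, 2: 1}[severity]          # more severe findings -> stronger mode (assumed rule)

region = tooth_region_from_attitude(roll_deg=-12.0, pitch_deg=8.0)
mode = select_vibration_mode(region, recognition={"upper_left": 2})
frequency_hz = MODE_FREQ_HZ[mode]
```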
  • the smart electric toothbrush 1 also includes a key operation module 109, which receives the user's key operation signals and sends them to the main control module; in this case the main control module 105 can also receive the key operation signals, convert them into corresponding control signals and send them to the image acquisition module, the motor drive module and the voice playback module.
  • the smart electric toothbrush 1 may include a first button and a second button; the key operation module 109 receives the user's operations on the first and second buttons and sends them to the main control module 105, which controls the actions of the image acquisition module 101, the motor drive module 106 and the voice playback module 107.
  • specifically, it can be arranged that, in the power-off state, a short press of the first button enters photographing mode; the voice playback module 107 then plays the prompt "Hello, please take a picture". A short press of the second button plays a shutter sound and the image acquisition module 101 captures an oral image. If the image judging module 102 judges the picture invalid, the voice playback module 107 plays a prompt such as "please adjust the toothbrush posture and take the picture again"; if it judges the picture valid, the picture is uploaded through the first communication module 103, an "upload successful" tone is played when the upload succeeds, and an "upload failed" tone is played if the upload has not succeeded within 5 seconds.
  • in photographing mode, a short press of the first button enters brushing mode, in which the motor drive module 106 starts the motor vibrating; the motor can be set to pause once every predetermined interval to remind the user to switch to another area. In brushing mode, a short press of the second button causes the main control module 105 to change the vibration mode through the control command it sends to the motor drive module 106.
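A minimal sketch of the two-button interaction just described, written as a small state machine (power-off, then photographing mode, then brushing mode). The module interfaces (camera, motor, speaker, uploader) are hypothetical placeholders for the patent's modules.

```python
# Sketch of the button-driven mode flow; callbacks stand in for the real toothbrush modules.
class ToothbrushUI:
    def __init__(self, camera, motor, speaker, uploader):
        self.state = "off"
        self.camera, self.motor, self.speaker, self.uploader = camera, motor, speaker, uploader

    def press_button1(self):
        if self.state == "off":
            self.state = "photo"
            self.speaker.play("Hello, please take a picture")
        elif self.state == "photo":
            self.state = "brushing"
            self.motor.start()                   # motor pauses periodically to prompt switching areas

    def press_button2(self):
        if self.state == "photo":
            image = self.camera.capture()
            if not image.valid:
                self.speaker.play("Please adjust the toothbrush posture and take the picture again")
            elif self.uploader.upload(image, timeout_s=5):
                self.speaker.play("Upload successful")
            else:
                self.speaker.play("Upload failed")
        elif self.state == "brushing":
            self.motor.next_vibration_mode()     # main control module changes the vibration mode
```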
  • the vibration modes of the motor can be divided into three modes, specifically including a first vibration mode, a second vibration mode and a third vibration mode, corresponding to the three vibration frequencies respectively.
  • the vibration frequency of the first vibration mode can be set to be higher than the vibration frequency of the second vibration mode, and the vibration frequency of the second vibration mode is higher than the vibration frequency of the third vibration mode.
  • the smart electric toothbrush 1 also includes a timing module connected to the main control module 105. The main control module 105 configures the timing module according to the brushing duration in the tooth cleaning parameters, and when the time counted by the timing module reaches the preset brushing duration, the main control module 105 stops the motor through the motor drive module 106, so that the brushing duration is controlled automatically.
  • the smart electric toothbrush 1 also includes a default frequency setting module, which receives and remembers vibration frequency or vibration mode adjustment instructions; when the same adjustment instruction appears repeatedly, that vibration frequency, or the vibration mode corresponding to it, is set as the default vibration frequency or default vibration mode. The adjustment instruction can be issued by the user through the first or second button. In this way the vibration frequency or vibration mode is adjusted automatically according to the user's habits, improving the user experience.
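A small sketch of the default-frequency setting behaviour described above: adjustment commands are counted and, once the same mode has been requested repeatedly, it becomes the default. The repetition threshold of 3 is an assumption, since the text only says the instruction "appears repeatedly".

```python
# Sketch: remember repeated vibration-mode adjustments and promote one to the default.
from collections import Counter

class DefaultModeMemory:
    def __init__(self, threshold: int = 3):       # threshold is an assumption
        self.counts = Counter()
        self.threshold = threshold
        self.default_mode = None

    def record_adjustment(self, mode: int) -> None:
        self.counts[mode] += 1
        if self.counts[mode] >= self.threshold:
            self.default_mode = mode               # becomes the default vibration mode
```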
  • the voice content played by the voice playing module 107 may also include one or more of oral health reminders and teeth brushing instructions.
  • the parameter determination module 108 can instead be arranged on the server and used to determine the tooth cleaning parameters corresponding to different oral regions according to the recognition results. In that case the second communication module 202 sends tooth cleaning parameters and, correspondingly, the first communication module 103 receives the tooth cleaning parameters sent back by the server. Placing the parameter determination module 108 on the server means, on the one hand, that computation is faster because the server is not limited by the toothbrush's hardware and, on the other hand, that the data storage burden on the smart electric toothbrush is reduced.
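The sketch below shows one possible shape for the parameter determination module: per-region recognition results are mapped to a brushing duration and a vibration frequency. The numeric durations and frequencies, and the field names in the recognition result, are placeholders rather than values taken from the patent.

```python
# Illustrative mapping from recognition results to tooth cleaning parameters.
from typing import Dict, NamedTuple

class CleaningParams(NamedTuple):
    brushing_seconds: int
    vibration_hz: int

def determine_parameters(recognition: Dict[str, dict]) -> Dict[str, CleaningParams]:
    params = {}
    for region, finding in recognition.items():
        severity = finding.get("severity", 0)            # 0 mild, 1 moderate, 2 severe
        has_calculus = finding.get("calculus", False)
        seconds = 30 + 10 * severity                     # clean problem areas longer (assumed rule)
        hz = 200 + 30 * severity + (15 if has_calculus else 0)
        params[region] = CleaningParams(seconds, hz)
    return params
```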
  • the recognition result generated by the recognition module 201 may also include only the position information of the teeth, information on whether the teeth have caries and caries severity grading information, or only the position information of the teeth, information on whether the teeth have dental calculus and calculus severity grading information.
  • in this way, recognition results can be provided to the smart electric toothbrush in a manner tailored to the actual application scenario and user group, and the smart electric toothbrush can adjust its oral cleaning service accordingly, further improving the applicability of the oral health system.
  • in the recognition module 201, determining the position information of dental caries and/or dental calculus through the target detection algorithm includes: segmenting the received oral image into S x S grid cells; setting multiple candidate boxes in each grid cell; evaluating each box, the evaluation covering whether a target object is present in the box and, if so, the category of the target object; and deleting the boxes that contain no target object and determining the positions of the boxes that do;
  • in the recognition module 201, determining the severity grading information of dental caries and/or dental calculus through the convolutional neural network includes: segmenting the oral image, based on the position information of the caries and/or calculus determined by the target detection algorithm, to obtain tooth pictures containing a target object, the target object being dental caries and/or dental calculus; grading the tooth pictures with the convolutional neural network, each grade corresponding to a different severity of the target object; and outputting a classification confidence, where a higher classification confidence indicates a more accurate category evaluation of the corresponding target object.
  • the specific methods used in the recognition module 201 to determine the position information and the severity grading information are described in Embodiment 2.
  • the server 2 also includes a tooth information acquisition module and an oral mucosa information acquisition module; the tooth information acquisition module obtains the number and shape of the user's teeth from the oral image, and the oral mucosa information acquisition module determines from the oral image whether the user's oral mucosa is present.
  • in this way, the user's tooth count, tooth shape and oral mucosa condition can serve as a basis for determining the tooth cleaning parameters, the user's oral health status is grasped more comprehensively, and the user experience is improved.
  • the tooth information acquisition module can judge the user's age group according to the number and shape of the user's teeth.
  • when determining the tooth cleaning parameters, the parameter determination module 108, whether arranged in the smart electric toothbrush 1 or in the server 2, can adjust the brushing duration in the tooth cleaning parameters according to the user's age group and number of teeth, for children, the elderly and users with few teeth, reducing the brushing duration to protect gum health in toothless areas.
  • the server 2 also includes an oral health report generating module, configured to generate an oral health report according to the recognition result; at this time, the second communication module may be configured to send the oral health report to a designated terminal.
  • the specified terminal is a mobile terminal or PC terminal associated with the user.
  • the oral health report may include the user's oral problem type, grading information and related oral images.
  • in this way, based on the user's oral images and the server's analysis results, the user's oral health status is tracked and analyzed systematically over the long term to reveal the user's oral problems and how they are developing, helping to prevent oral diseases or enabling the user to treat them in time, and improving the user experience.
  • the server 2 can compare multiple oral health reports, generate an oral health trend report, and send the oral health trend report to a designated terminal. In this way, comparison information of the oral health status is provided for the user, so that the user has a more intuitive understanding of the oral health status.
  • FIG. 2 of the specification shows the control method, provided by an embodiment of the present application, for a smart electric toothbrush adjusted based on artificial intelligence image recognition; the method is applied to any of the above oral health management systems for adjusting an electric toothbrush based on artificial intelligence image recognition and includes the following steps:
  • S11: The smart electric toothbrush collects an oral image;
  • S12: The smart electric toothbrush judges whether the collected oral image is valid; if the oral image is valid, it uploads the valid oral image to the server, and if the oral image is invalid, it issues a corresponding voice prompt;
  • S13: The server recognizes the received oral image and generates a recognition result, which includes at least the position information of the teeth, information on whether the teeth have caries and/or calculus, and severity grading information for the caries and/or calculus;
  • S14: The server determines the tooth cleaning parameters corresponding to different oral regions according to the recognition result and sends them to the smart electric toothbrush, or the server sends the recognition result to the smart electric toothbrush and the smart electric toothbrush determines the tooth cleaning parameters corresponding to different oral regions according to the recognition result; the tooth cleaning parameters include brushing duration and/or vibration frequency;
  • S15: The smart electric toothbrush obtains the position information of the tooth currently being cleaned;
  • S16: The smart electric toothbrush selects the corresponding tooth cleaning parameters according to the position information of the tooth currently being cleaned;
  • S17: The smart electric toothbrush controls the motor vibration according to the current tooth cleaning parameters.
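An end-to-end sketch of steps S15 to S17 on the toothbrush side, assuming hypothetical gyro, motor and parameter objects: read the current tooth region, look up its cleaning parameters, and drive the motor until the preset brushing duration elapses (the role of the timing module).

```python
# Sketch of the brushing control loop (interfaces are illustrative, not the patent's APIs).
import time

def brushing_loop(gyro, motor, params, total_seconds: int = 120) -> None:
    start = time.time()
    while time.time() - start < total_seconds:        # timing module stops brushing at the limit
        region = gyro.current_tooth_region()           # S15: position of the tooth being cleaned
        p = params.get(region)                         # S16: select the corresponding parameters
        if p is not None:
            motor.set_frequency(p.vibration_hz)        # S17: control the motor vibration
        time.sleep(0.1)
    motor.stop()
```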
  • the intelligent electric toothbrush and the server may communicate via a wireless network or a Bluetooth network.
  • the smart electric toothbrush collects oral images through a miniature wide-angle camera, which is set on the smart electric toothbrush.
  • the lens of miniature wide-angle camera can be made of crystal glass.
  • step S11 collecting oral images by the smart electric toothbrush may include turning on the supplementary light module before collecting.
  • in step S12, the smart electric toothbrush's judgment of whether the collected oral image is valid may include judging whether the oral image is clear, whether it is overexposed or too dark, and whether its imaging angle is appropriate.
  • specifically, parameter thresholds for the sharpness, exposure and imaging angle of the oral picture are preset and each validity condition is checked in turn against them; if all three conditions are satisfied, the oral image is judged valid and sent to the server for recognition, and if at least one of the three conditions is not satisfied, the oral image is judged invalid.
  • referring to FIG. 3 of the specification, step S13 may specifically include the following steps:
  • S131: Acquire the oral image;
  • S132: Determine the position information of dental caries and/or dental calculus through a target detection algorithm;
  • S133: Determine the severity grading information of the dental caries and/or dental calculus through a convolutional neural network;
  • S134: Generate the recognition result.
  • step S132 may specifically include:
  • S1321: Segment the received oral image into S x S grid cells (blocks);
  • S1322: Set multiple candidate boxes in each grid cell;
  • S1323: Evaluate each box; the evaluation covers whether a target object is present in the box and, if so, the category of the target object;
  • the category of the target object may be dental caries and/or dental calculus.
  • S1324: Delete the boxes in which no target object exists and determine the position of each box containing a target object, where the position of a box consists of four values: the center point coordinates b_x and b_y, and the box width b_w and height b_h.
  • the target detection algorithm in step S132 may use the YOLOv5 algorithm.
  • at the input end, YOLOv5 uses Mosaic data augmentation, stitching several images together to generate new images and thereby enlarging the training set.
  • during training, YOLOv5 adaptively scales the input images so that the black letterbox padding added after resizing is minimized.
  • YOLOv5 predicts b_x, b_y, b_w and b_h by predicting t_x, t_y, t_w and t_h, with the following relations:

    b_x = σ(t_x) + c_x
    b_y = σ(t_y) + c_y
    b_w = p_w · e^(t_w)
    b_h = p_h · e^(t_h)

  • here t_x, t_y, t_w and t_h are the predicted values, c_x and c_y are the coordinates of the upper-left corner of the target object's box relative to the whole oral image, and p_w and p_h are the width and height of the target object's box.
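For reference, the box-decoding relations above can be written directly in code; this mirrors the standard YOLO decoding and uses only the symbols defined in the text.

```python
# Direct implementation of b_x = sigma(t_x) + c_x, b_y = sigma(t_y) + c_y,
# b_w = p_w * exp(t_w), b_h = p_h * exp(t_h).
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, p_w, p_h):
    b_x = sigmoid(t_x) + c_x
    b_y = sigmoid(t_y) + c_y
    b_w = p_w * math.exp(t_w)
    b_h = p_h * math.exp(t_h)
    return b_x, b_y, b_w, b_h
```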
  • at the output end, the GIoU loss function is used to optimize the model parameters:

    IoU = |A ∩ B| / |A ∪ B|
    GIoU = IoU - |C \ (A ∪ B)| / |C|
    L_GIoU = 1 - GIoU

  • where A and B are the predicted target object box and the real (ground-truth) object box respectively, IoU is the intersection of A and B divided by their union, and C is the smallest enclosing rectangle of A and B;
  • the overall loss (Loss) function can be written, in the standard YOLO form consistent with the symbol definitions below, as:

    Loss = λ_coord Σ_i Σ_j 1_ij^obj [ (b_x - b̂_x)² + (b_y - b̂_y)² + (b_w - b̂_w)² + (b_h - b̂_h)² ]
           + Σ_i Σ_j 1_ij^obj (C_i - Ĉ_i)²
           + λ_noobj Σ_i Σ_j 1_ij^noobj (C_i - Ĉ_i)²

    (i runs over the S x S grid cells and j over the boxes predicted in each cell)

  • where b_x, b_y, b_w and b_h are the predicted values and b̂_x, b̂_y, b̂_w and b̂_h are the labeled values; C_i and Ĉ_i are the confidences of the predicted value and the labeled value, respectively; 1_ij^obj is a control function indicating that an object is present in the j-th prediction box of grid cell i; and λ_coord and λ_noobj are two hyperparameters introduced so that boxes containing detection targets receive larger weight.
  • after this, the boxes of overlapping and repeated target objects can be removed through a non-maximum suppression (NMS) operation.
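A minimal sketch of the non-maximum suppression step: boxes are sorted by score, the best box is kept, and remaining boxes whose IoU with it exceeds a threshold are discarded. The 0.5 threshold is a common default, not a value stated in the patent.

```python
# Greedy NMS over boxes given as (x1, y1, x2, y2) with per-box scores.
def iou(a, b) -> float:
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold: float = 0.5):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```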
  • step S133 may specifically include:
  • S1331 Based on the position information of caries and/or calculus determined by the target detection algorithm, segment the oral image to obtain a tooth picture with a target object, and the target object is caries and/or calculus;
  • S1332 Use the convolutional neural network to classify the teeth pictures, each level corresponds to the severity of different target objects;
  • the convolutional neural network is used to classify the severity of the target object, which can be divided into three categories, 0 representing mild, 1 representing moderate, and 2 representing severe. Thus, the judgment of the severity of dental caries and/or dental calculus is realized.
  • S1333 Output classification confidence. The higher the classification confidence, the higher the accuracy of category evaluation of the corresponding target object.
  • the convolutional neural network is used to output the classification confidence of each tooth image containing the target object in step S1331.
  • the higher the classification confidence, the more accurate the convolutional neural network's category evaluation of the target object.
  • the classification confidence can be used to filter the recognition results.
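The severity grading step could look roughly like the following, here sketched with PyTorch (the patent does not name a framework or architecture): a small CNN classifies each cropped tooth picture into three grades, and the softmax probability of the predicted grade serves as the classification confidence used to filter results. The network shown is an illustrative stand-in, not the patent's model.

```python
# Illustrative three-grade severity classifier with a confidence output.
import torch
import torch.nn as nn

class SeverityClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = SeverityClassifier().eval()
patch = torch.rand(1, 3, 64, 64)                 # cropped tooth picture containing the target object
with torch.no_grad():
    probs = torch.softmax(model(patch), dim=1)
grade = int(probs.argmax())                      # 0 mild, 1 moderate, 2 severe
confidence = float(probs.max())                  # higher confidence -> more reliable grading
```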
  • step S134 specifically includes, comprehensively generating a recognition result according to the output results of step S132 and step S133.
  • the recognition result may also include only the position information of the teeth, information on whether the teeth have caries and caries severity grading information, or only the position information of the teeth, information on whether the teeth have dental calculus and calculus severity grading information.
  • in step S14, the method may also include the server obtaining, from the oral image, the number of the user's teeth, the shape of the teeth and whether the user's oral mucosa is present.
  • in step S14, the method may also include the server judging the user's age group from the number and shape of the user's teeth.
  • in step S14, when determining the tooth cleaning parameters, the brushing duration in the tooth cleaning parameters can be adjusted, according to the user's age group and number of teeth, for children, the elderly and users with few teeth, reducing the brushing duration to protect gum health in toothless areas.
  • the smart electric toothbrush can obtain the position information of the tooth currently being cleaned through a gyroscope, which is used to obtain the position, movement trajectory and acceleration parameters of the smart electric toothbrush; the position of the tooth being cleaned is then determined from the attitude parameters of the smart electric toothbrush during use.
  • the method further includes that the server can generate an oral health report according to the identification result and send the oral health report to the designated terminal.
  • the oral health report may include the user's oral problem type, grading information and related oral images.
  • the method further includes that the server can compare multiple oral health reports, generate an oral health trend report, and send the oral health trend report to a designated terminal.
  • the method further includes that the smart toothbrush performs one or more of a photographing operation, a tooth brushing operation, and a voice playback according to the user's button operation.
  • the beneficial effects of the present disclosure are as follows: by acquiring the user's oral images and recognizing and analyzing them with a target detection algorithm and a convolutional neural network, accurate oral health information about the user, such as dental caries and dental calculus, is obtained, and the smart electric toothbrush is controlled accordingly, providing the user with targeted oral cleaning services, improving the cleaning effect of the smart electric toothbrush and improving the user experience; the check of the validity of the oral image avoids the adverse effects on the recognition results caused by unclear oral images, overexposed or too dark pictures and unsuitable imaging angles, further improving the accuracy of the recognition results; and, based on the image judging module's analysis of invalid pictures, the voice playback module is controlled to play the relevant voice data, prompting the user by voice so that the user receives more intuitive guidance for capturing valid oral images, which further improves the accuracy of the recognition results on the one hand and enhances the user experience on the other.
  • referring to FIG. 6 of the specification, the smart electric toothbrush provided by an embodiment of the present application is applied to any of the above oral health management systems for adjusting an electric toothbrush based on artificial intelligence image recognition and may specifically include:
  • An image acquisition module 101 configured to acquire oral images
  • the image judging module 102 is used to judge whether the collected oral image is a valid oral image; if the oral image is valid, it is sent by the first communication module to the server for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module;
  • the first communication module 103 is used to send valid oral images to the server, and receive the recognition result or tooth cleaning parameters sent back by the server, the recognition result includes at least the position information of the teeth, information on whether the teeth have caries and/or dental calculus and information on severity grading of caries and/or calculus, tooth cleaning parameters including brushing duration and/or vibration frequency;
  • the positioning module 104 is used to obtain the position information of the tooth being cleaned and send it to the main control module;
  • the main control module 105 is used to receive the reason of invalid image, select the corresponding voice data in the voice database according to the reason of invalid image, and according to the position information of the teeth being cleaned, select the corresponding tooth cleaning parameters and convert them into control signals for real-time transmission to the motor drive module;
  • the motor drive module 106 is used to connect the motor, and drive the motor to vibrate according to the control signal of the main control module;
  • the voice playing module 107 is used for playing voice according to the voice data selected by the main control module.
  • the key operation module is used to receive the user's key operation signals and send them to the main control module; in this case the main control module can receive the key operation signals, convert them into corresponding control signals and send them to the image acquisition module, the motor drive module and the voice playback module.
  • the smart electric toothbrush provided by this embodiment is based on the same concept as the system embodiment; its specific implementation and related beneficial effects are described in detail in the system embodiment and are not repeated here.
  • the beneficial effects of the present disclosure are as follows: the smart electric toothbrush acquires the user's oral images, receives the recognition results or tooth cleaning parameters sent back by the server, and is controlled accordingly, providing the user with targeted oral cleaning services, improving the cleaning effect of the smart electric toothbrush and improving the user experience; the smart electric toothbrush's check of the validity of the oral image avoids the adverse effects on the recognition results caused by unclear oral images, overexposed or too dark pictures and unsuitable imaging angles, further improving the accuracy of the recognition results; and, based on the image judging module's analysis of invalid pictures, the voice playback module is controlled to play the relevant voice data, prompting the user by voice so that the user receives more intuitive guidance for capturing valid oral images, which further improves the accuracy of the recognition results on the one hand and enhances the user experience on the other.
  • each embodiment in this specification is described in a progressive manner, the same and similar parts of each embodiment can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • where the description of an embodiment is relatively brief, reference may be made to the corresponding description in the method embodiments.
  • the storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Acoustics & Sound (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Brushes (AREA)

Abstract

An oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition, comprising a smart electric toothbrush (1) and a server (2), wherein the smart electric toothbrush (1) includes an image acquisition module (101), an image judging module (102), a first communication module (103), a positioning module (104), a main control module (105), a motor drive module (106) and a voice playback module (107), and the server (2) includes a recognition module (201) and a second communication module (202); the system further includes a parameter determination module (108) for determining the tooth cleaning parameters corresponding to different oral regions according to the recognition results, the parameter determination module (108) being arranged on the smart electric toothbrush (1) or the server (2). By acquiring and analyzing images of the user's oral cavity and selecting tooth cleaning parameters according to the analysis results, the system provides the user with targeted oral cleaning services, improves the cleaning effect of the smart electric toothbrush and improves the user experience.

Description

Oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition
Technical Field
The present disclosure relates to the technical field of electric toothbrushes, and in particular to an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition.
Background Art
At present, as quality of life rises, people's requirements for oral care are becoming ever higher, and electric toothbrushes with better cleaning performance have emerged in response.
Electric toothbrushes currently on the market cannot collect and analyze information about the user's oral health, nor can they provide targeted oral cleaning services according to the user's oral health, and therefore cannot achieve a better oral cleaning effect.
Summary of the Invention
In view of the above problems, the present disclosure proposes an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition.
To solve at least one of the above technical problems, the present disclosure proposes the following technical solutions:
In a first aspect, an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition is provided, comprising a smart electric toothbrush and a server,
wherein the smart electric toothbrush includes:
an image acquisition module, configured to collect oral images;
an image judging module, configured to judge whether a collected oral image is a valid oral image; if the oral image is valid, it is sent by the first communication module to the server for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module;
a first communication module, configured to send valid oral images to the server and to receive the recognition results or tooth cleaning parameters sent back by the server, the recognition results including at least the position information of the teeth, information on whether the teeth have caries and/or dental calculus, and severity grading information for the caries and/or calculus, the tooth cleaning parameters including brushing duration and/or vibration frequency;
a positioning module, configured to obtain the position information of the tooth being cleaned and send it to the main control module;
a main control module, configured to select, according to the position information of the tooth being cleaned, the corresponding tooth cleaning parameters, convert them into control signals and send them to the motor drive module in real time, and, upon receiving from the image judging module the reason an image is invalid, to select the corresponding voice data in the voice database according to that reason;
a motor drive module, configured to connect to the motor and drive the motor to vibrate according to the control signals of the main control module; and
a voice playback module, configured to play voice according to the voice data selected by the main control module;
the server includes:
a recognition module, configured to recognize the received oral image and generate a recognition result, the recognition method including acquiring the oral image, determining the position information of dental caries and/or dental calculus through a target detection algorithm, determining the severity grading information of the dental caries and/or dental calculus through a convolutional neural network, and generating the recognition result; and
a second communication module, configured to receive the oral images sent by the smart electric toothbrush and to send the recognition results or tooth cleaning parameters to the smart electric toothbrush;
the system further includes a parameter determination module, configured to determine the tooth cleaning parameters corresponding to different oral regions according to the recognition results, the parameter determination module being arranged on the smart electric toothbrush or the server.
In a second aspect, a control method for a smart electric toothbrush adjusted based on artificial intelligence image recognition is provided, applied to any of the above oral health management systems for adjusting an electric toothbrush based on artificial intelligence image recognition, and including the following steps:
the smart electric toothbrush collects an oral image;
the smart electric toothbrush judges whether the collected oral image is valid; if the oral image is valid, the valid oral image is uploaded to the server, and if the oral image is invalid, a corresponding voice prompt is issued;
the server recognizes the received oral image and generates a recognition result, the recognition result including at least the position information of the teeth, information on whether the teeth have caries and/or calculus, and severity grading information for the caries and/or calculus;
the server determines the tooth cleaning parameters corresponding to different oral regions according to the recognition result and sends them to the smart electric toothbrush, or the server sends the recognition result to the smart electric toothbrush and the smart electric toothbrush determines the tooth cleaning parameters corresponding to different oral regions according to the recognition result, the tooth cleaning parameters including brushing duration and/or vibration frequency;
the smart electric toothbrush obtains the position information of the tooth currently being cleaned;
the smart electric toothbrush selects the corresponding tooth cleaning parameters according to the position information of the tooth currently being cleaned;
the smart electric toothbrush controls the motor vibration according to the current tooth cleaning parameters;
wherein the server recognizing the received oral image and generating the recognition result specifically includes:
acquiring the oral image;
determining the position information of dental caries and/or dental calculus through a target detection algorithm;
determining the severity grading information of the dental caries and/or dental calculus through a convolutional neural network; and
generating the recognition result.
In a third aspect, a smart electric toothbrush is provided, applied to any of the above oral health management systems for adjusting an electric toothbrush based on artificial intelligence image recognition, and including:
an image acquisition module, configured to collect oral images;
an image judging module, configured to judge whether a collected oral image is a valid oral image; if the oral image is valid, it is sent by the first communication module to the server for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module;
a first communication module, configured to send valid oral images to the server and to receive the recognition results or tooth cleaning parameters sent back by the server, the recognition results including at least the position information of the teeth, information on whether the teeth have caries and/or dental calculus, and severity grading information for the caries and/or calculus, the tooth cleaning parameters including brushing duration and/or vibration frequency;
a positioning module, configured to obtain the position information of the tooth being cleaned and send it to the main control module;
a main control module, configured to receive the reason an image is invalid and select the corresponding voice data in the voice database according to that reason, and to select, according to the position information of the tooth being cleaned, the corresponding tooth cleaning parameters, convert them into control signals and send them to the motor drive module in real time;
a motor drive module, configured to connect to the motor and drive the motor to vibrate according to the control signals of the main control module;
a voice playback module, configured to play voice according to the voice data selected by the main control module.
The beneficial effects of the present disclosure are as follows: by acquiring the user's oral images and recognizing and analyzing them with a target detection algorithm and a convolutional neural network, accurate oral health information about the user, such as dental caries and dental calculus, is obtained, and the smart electric toothbrush is controlled accordingly, providing the user with targeted oral cleaning services, improving the cleaning effect of the smart electric toothbrush and improving the user experience; the image judging module's prior check of the validity of the oral image avoids the adverse effects on the recognition results caused by unclear oral images, overexposed or too dark pictures and unsuitable imaging angles, which improves the accuracy of the recognition results; and, based on the image judging module's analysis of invalid pictures, the voice playback module is controlled to play the relevant voice data, prompting the user by voice so that the user receives more intuitive guidance for capturing valid oral images, which further improves the accuracy of the recognition results on the one hand and enhances the user experience on the other.
In addition, unless otherwise specified, any part of the technical solution of the present disclosure can be implemented by conventional means in the art.
Brief Description of the Drawings
To explain the technical solutions in the specific embodiments of the present disclosure more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition provided by an embodiment of the present disclosure.
FIG. 2 is a flowchart of a control method for a smart electric toothbrush provided by another embodiment of the present disclosure.
FIG. 3 is a flowchart of step S13 of the control method for a smart electric toothbrush provided by an embodiment of the present disclosure.
FIG. 4 is a flowchart of step S132 of the control method for a smart electric toothbrush provided by an embodiment of the present disclosure.
FIG. 5 is a flowchart of step S133 of the control method for a smart electric toothbrush provided by an embodiment of the present disclosure.
FIG. 6 is a schematic structural diagram of a smart electric toothbrush provided by another embodiment of the present disclosure.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only some, not all, of the embodiments of the present disclosure; they are intended to explain the present disclosure and not to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
It should be noted that the terms "comprise" and "have", and any variants thereof, are intended to cover non-exclusive inclusion; for example, a process, method, apparatus or server that comprises a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
Embodiment 1:
Referring to FIG. 1 of the specification, an oral health management system for adjusting an electric toothbrush based on artificial intelligence image recognition provided by an embodiment of the present application is shown; the system includes a smart electric toothbrush 1 and a server 2.
The smart electric toothbrush 1 includes:
an image acquisition module 101, configured to collect oral images;
an image judging module 102, configured to judge whether a collected oral image is a valid oral image; if the oral image is valid, it is sent by the first communication module 103 to the server 2 for recognition, and if the oral image is invalid, the reason the image is invalid is sent to the main control module 105;
a first communication module 103, configured to send valid oral images to the server 2 and to receive the recognition results sent back by the server 2, the recognition results including the position information of the teeth, information on whether the teeth have caries and calculus, and severity grading information for the caries and calculus;
a positioning module 104, configured to obtain the position information of the tooth being cleaned and send it to the main control module 105;
a main control module 105, configured to select, according to the position information of the tooth being cleaned, the corresponding tooth cleaning parameters, convert them into control signals and send them to the motor drive module 106 in real time, and, upon receiving from the image judging module 102 the reason an image is invalid, to select the corresponding voice data in the voice database according to that reason;
a motor drive module 106, configured to connect to the motor and drive the motor to vibrate according to the control signals of the main control module 105;
a voice playback module 107, configured to play voice according to the voice data selected by the main control module 105; and
a parameter determination module 108, configured to determine the tooth cleaning parameters corresponding to different oral regions according to the recognition results sent back by the server 2, the tooth cleaning parameters including brushing duration and vibration frequency;
the server 2 includes:
a recognition module 201, configured to recognize the received oral image and generate a recognition result, the recognition method including acquiring the oral image, determining the position information of caries and calculus through a target detection algorithm, determining the severity grading information of the caries and calculus through a convolutional neural network, and generating the recognition result; and
a second communication module 202, configured to receive the valid oral images sent by the smart electric toothbrush and to send the recognition results to the smart electric toothbrush.
The beneficial effects of the present disclosure are as follows: by acquiring the user's oral images and recognizing and analyzing them with a target detection algorithm and a convolutional neural network, accurate oral health information about the user, such as dental caries and dental calculus, is obtained, and the smart electric toothbrush is controlled accordingly, providing the user with targeted oral cleaning services, improving the cleaning effect of the smart electric toothbrush and improving the user experience; the image judging module's prior check of the validity of the oral image avoids the adverse effects on the recognition results caused by unclear oral images, overexposed or too dark pictures and unsuitable imaging angles, which improves the accuracy of the recognition results; and, based on the image judging module's analysis of invalid pictures, the voice playback module is controlled to play the relevant voice data, prompting the user by voice so that the user receives more intuitive guidance for capturing valid oral images, which further improves the accuracy of the recognition results on the one hand and enhances the user experience on the other.
在可选的实施例中,图像获取模块101可以是微型广角摄像头,设置在智能电动牙刷的刷头上。由此,无需专业口腔医学成像设备,即可获得较完整的口腔图像。
在可选的实施例中,图像获取模块101的镜头可以由水晶玻璃制成。由此,由于水晶玻璃具有优良的防雾性能,避免了由于人口腔中存在水雾导致获取到的口腔图像不清晰,进一步提高识别结果的准确性。
在可选的实施例中,在图像获取模块101周围设置补光模块,用于保证拍摄时有足够的光照,由此提高口腔图像的清晰度,避免口腔图像过暗,保证口腔图像的采集效果,进一步提高识别结果的准确性。
在可选的实施例中,图像判断模块102对口腔图像是否有效的判断可以包括判断口腔图像是否清晰、判断口腔图像是否过曝或过暗以及判断口腔图像的成像角度是否合适。具体的,预设口腔图片清晰度、曝光度与成像角度的相关参数阈值,依据预设的参数阈值对口腔图像有效的各条件依次进行判断,若口腔图片同时满足清晰、曝光度适中与成像角度合适的是三个条件,则判断该口腔图像有效,发送至服务器进行识别;若口腔图片不满足以上三个判断条件中的至少一个,则判断该口腔图像无效,图像判断模块102将无效原因发送至主控模块105。当口腔图片无效的原因不止一个时,图像判断模块102将该口腔图片全部的无效原因发送至主控模块105。
在可选的实施例中,图像获取模块101获取到的图片为JPG格式的图片。在图像判断模块102中,口腔图像是否清晰可以通过口腔图像的像素是否满足720*720进行判断;口腔图像是否过曝或过暗可以通过口腔图像的亮度是否满足200cd/平方米进行判断;口腔图像的成像角度是否合适可 以通过成像角度大小是否在大于50°且小于等于90°的范围内进行判断,其中,成像角度的基准为牙齿咬合面。由此,通过设置口腔图片的相关规格参数,满足对智能电动牙刷对口腔图像有效性的判断,为后续对口腔图像进行准确有效的识别提供基础。
在可选的实施例中,当无效原因为口腔图像不清晰时,主控模块105在语音数据库中选择相应的语音数据可以包括拍摄时长的调整提示或重新对焦的调整提示;当无效原因为口腔图像过曝或过暗时,主控模块105在语音数据库中选择相应的语音数据可以包括用户张口大小的调整提示以及开启或关闭补光模块的调整提示;当无效原因为口腔图像成像角度不合适时,主控模块105在语音数据库中选择相应的语音数据可以包括用户拍照姿势的调整提示,例如“请将牙刷头往里伸”等等。由此,口腔图像无效的原因与调整提示的语音数据相对应,针对性地对用户做出提示,辅助用户拍摄有效的口腔图像,一方面,提高口腔图像质量,进一步提高识别准确性,另一方面,通过语音对用户进行提示,更加直接有效,提高用户体验。
在可选的实施例中,主控模块105在接收到图像判断模块102发出的图像无效原因后,可根据口腔图片无效原因的种类直接控制补光模块进行开闭或直接控制图像获取模块101调整对焦。由此,降低用户操作难度,使用更方便,提高用户体验。
在可选的实施例中,当主控模块105根据口腔图片无效原因的种类直接控制补光模块进行开闭或直接控制图像获取模块101调整对焦时,可以在语音数据库中选择相应的等待提示语音,例如“自动对焦中,请保持该姿势”等等。在补光模块的开闭动作或图像获取模块101的对焦结束后,可以在语音数据库中选择相应的拍照提示语音,例如“已重新对焦,请拍照”等等。
在可选的实施例中,定位模块104可以包括陀螺仪,陀螺仪用于获取智能电动牙刷的位置、移动轨迹以及加速参数。由此,根据智能电动牙刷使用过程中的姿态参数判断正在清洁的牙齿的位置信息,并根据服务器2发回的该位置牙齿的龋齿或牙结石的识别结果,选择相应的振动模式,通过控制电机驱动模块驱动电机以该模式对应的频率振动;针对不同的牙齿状况提供相应的口腔清洗服务,提高用户体验。
在可选的实施例中,智能电动牙刷1还包括,按键操作模块109,用于 接收用户的按键操作信号并发送至主控模块;此时的主控模块105,还可以用于接收按键操作信号并转化成相应的控制信号发送至图像获取模块、电机驱动模块和语音播放模块。
在可选的实施例中,智能电动牙刷1可以包括第一按键与第二按键,按键操作模块109接收用户对于第一按键与第二按键的操作并将其发送至主控模块105,并由主控模块105控制图像获取模块101、电机驱动模块106和语音播放模块107动作。
具体的,可以设定关机状态下,短按第一按键,进入拍照模式,此时语音播放模块107可播放相应的语音提示“您好,请拍照”,短按第二按键,此时语音播放模块107可播放相应的拍照声音,并且图像获取模块101进行口腔图像采集;若图像判断模块102判断该图片无效,则语音播放模块107播放相关的语音提示“请调整电动牙刷姿势,并拍照”等,若图像判断模块102判断该图片有效,并通过第一通讯模块103上传,若上传成功,出现“上传成功”提示音,若超过5秒未上传成功,则出现“上传失败”提示音。
在拍照模式下,短按第一按键,进入刷牙模式,此时电机驱动模块106控制电机开始振动,电机可以设定为每隔预定时长暂停一次,用于提醒用户切换部位,在刷牙模式中,短按第二按键,主控模块5通过控制发送至电机驱动模块106的控制指令,控制电机更改振动模式。
在可选的实施例中,电机的振动模式可以分为三种模式,具体包括第一振动模式、第二振动模式与第三振动模式,分别对应三种振动频率。具体的,可以设置第一振动模式振动频率大于第二振动模式振动频率,第二振动模式振动频率大于第三振动模式振动频率。
在可选的实施例中,智能电动牙刷1中还包括计时模块,计时模块与主控模块105相连接,主控模块105根据牙齿清洁参数中的刷牙时长,对计时模块进行设置,当计时模块计算的时间达到预设的刷牙时长,则主控模块105通过控制电机驱动模块106,停止电机的振动,达到自动控制刷牙时长的效果。
在可选的实施例中,智能电动牙刷1还包括默认频率设定模块,用于接收并记忆振动频率或振动模式的调整指令,当多次出现同一振动频率或振动模式的调整指令,则将该振动频率或该振动频率对应的振动模式设置为默认振动频率或默认振动模式。调整指令可以由用户通过操作第一按键 或第二按键发出。由此,根据用户使用习惯针对性地对振动频率或振动模式进行自动调整,提高用户使用体验。
在可选的实施例中,语音播放模块107播放的语音内容还可以包括口腔健康温馨提示和刷牙指引中的一种或多种。
在可选的实施例中，参数确定模块108可以设置在服务器上，用于根据识别结果确定不同口腔区域对应的牙齿清洁参数，此时，第二通讯模块202发送的是牙齿清洁参数，相应的，第一通讯模块103接收的是服务器发回的牙齿清洁参数。由此，通过将参数确定模块108设置在服务器上，一方面，服务器不受智能电动牙刷的规格限制，运算速度更快，另一方面，降低智能电动牙刷的数据存储压力。
在可选的实施例中,识别模块201生成的识别结果也可以仅包括牙齿的位置信息、牙齿是否存在龋齿的信息以及龋齿的严重程度分级信息,或仅包括牙齿的位置信息、牙齿是否存在牙结石的信息以及牙结石的严重程度分级信息。由此,根据实际应用场景与用户人群,可针对性地为智能电动牙刷提供识别结果,并由智能电动牙刷对口腔清洁服务进行调整,进一步提高口腔健康***的适用性。
在可选的实施例中,识别模块201中,通过目标检测算法确定龋齿和/或牙结石的位置信息包括,
将接收到的口腔图像进行分割,获得S×S个网格;
在每个网格中设定多个框;
对每个框进行评估,评估内容包括,框内是否存在目标物体以及当框内存在目标物体时,目标物体的类别;
删除不存在目标物体的框,确定存在目标物体的框的位置;
在可选的实施例中,识别模块201中,通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息包括,
基于通过目标检测算法确定的龋齿和/或牙结石的位置信息,将口腔图像进行分割,得到存在目标物体的牙齿图片,目标物体是龋齿和/或牙结石;
使用卷积神经网络对牙齿图片进行分级,每个级别对应不同的目标物体的严重程度;
输出分类置信度,分类置信度越高,对应的目标物体的类别评估的准确度越高。
以上识别模块201中通过目标检测算法确定龋齿和/或牙结石的位置信息的方法与通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息的方法具体可参考说明书实施例2。
在可选的实施例中,服务器2还包括牙齿信息获取模块和口腔黏膜信息获取模块,牙齿信息获取模块用于根据口腔图像获取用户牙齿数量与牙齿形态,口腔黏膜信息获取模块用于根据口腔图像判断用户口腔黏膜是否存在。由此,用户的牙齿数量、牙齿形态和口腔黏膜情况可以作为牙齿清洁参数的确定依据;对用户的口腔健康状况掌握更加全面,提高用户体验。
在可选的实施例中,牙齿信息获取模块可以根据用户牙齿数量与牙齿形态对用户的年龄段进行判断。
在可选的实施例中,设置在智能电动牙刷1或服务器2中的参数确定模块108,在确定牙齿清洁参数时,可以根据用户的年龄段与牙齿数量,对儿童、老人及牙齿数量较少的用户,调整牙齿清洁参数中的刷牙时长,通过减少刷牙时长保护无牙区域的牙龈健康。
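下面以Python代码草图示意这一调整逻辑（该代码并非本申请的原始实现，基础刷牙时长、年龄段划分与缩短比例均为示例性假设）：

```python
def adjust_brushing_time(base_seconds, age_group, tooth_count):
    """根据用户年龄段与牙齿数量缩短刷牙时长，以保护无牙区域的牙龈（各系数为示例性假设）。"""
    seconds = base_seconds
    if age_group in ("儿童", "老人"):
        seconds *= 0.8                      # 儿童与老人适当缩短刷牙时长
    if tooth_count < 20:                    # 牙齿数量较少的用户
        seconds *= tooth_count / 28.0       # 按现存牙齿比例进一步缩短
    return int(seconds)

print(adjust_brushing_time(120, "老人", 16))  # 54
```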
在可选的实施例中,服务器2还包括,口腔健康报告生成模块,用于根据识别结果生成口腔健康报告;此时的第二通讯模块,可以用于将口腔健康报告发送至指定终端。具体的,指定终端为用户关联的移动终端或PC端。
在可选的实施例中,口腔健康报告可以包括用户口腔问题类型、分级信息及相关口腔图像。由此,根据用户口腔图像以及服务器的分析结果,对用户口腔健康状况进行长期***的跟踪与分析,得到用户口腔问题以及发展趋势,预防口腔疾病或使用户可以及时治疗相关口腔疾病,提高用户体验。
在可选的实施例中,服务器2可以将多个口腔健康报告进行比对,生成口腔健康趋势报告,并将口腔健康趋势报告发送至指定终端。由此,为用户提供口腔健康状况的比对信息,使用户对口腔健康状况有更为直观的了解。
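口腔健康趋势报告的一种生成方式可以参考如下Python代码草图（该代码并非本申请的原始实现，报告的数据结构、牙位命名与分级取值均为示例性假设），通过比对多份报告中同一牙位的严重程度分级得到变化趋势：

```python
def build_trend(reports):
    """reports: 按时间排序的报告列表，每份报告为 {牙位: 严重程度分级} 的字典（结构为示例性假设）。"""
    first, last = reports[0], reports[-1]
    trend = {}
    for tooth in set(first) | set(last):
        before, after = first.get(tooth, 0), last.get(tooth, 0)
        trend[tooth] = "加重" if after > before else ("好转" if after < before else "稳定")
    return trend

reports = [{"左下6": 1, "右上7": 0}, {"左下6": 2, "右上7": 0}]
print(build_trend(reports))  # 例如 {'左下6': '加重', '右上7': '稳定'}（键顺序可能不同）
```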
需要说明的是,上述实施例提供的***,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
实施例2:
参考说明书附图2,示出了本申请实施例提供的基于人工智能图像识别调整电动牙刷的智能电动牙刷的控制方法,应用于上述实施例中任一基于人工智能图像识别调整电动牙刷的口腔健康管理***,包括以下步骤,
S11:智能电动牙刷采集口腔图像;
S12:智能电动牙刷判断采集到的口腔图像是否有效,若口腔图像有效,则将有效的口腔图像上传至服务器,若口腔图像无效,则发出相应语音提示;
S13:服务器对接收到的口腔图像进行识别,生成识别结果,识别结果至少包括牙齿的位置信息、牙齿是否存在龋齿和/或牙结石的信息以及龋齿和/或牙结石的严重程度分级信息;
S14:服务器根据识别结果确定不同口腔区域对应的牙齿清洁参数并发送至智能电动牙刷,或服务器将识别结果发送至智能电动牙刷,智能电动牙刷根据识别结果确定不同口腔区域对应的牙齿清洁参数,牙齿清洁参数包括刷牙时长和/或振动频率;
S15:智能电动牙刷获取当前正在清洁的牙齿的位置信息;
S16:智能电动牙刷根据当前正在清洁的牙齿的位置信息,选择对应的牙齿清洁参数;
S17:智能电动牙刷根据当前牙齿清洁参数控制电机振动。
在可选的实施例中,智能电动牙刷与服务器可以通过无线网络或蓝牙网络进行通讯连接。
在可选的实施例中,步骤S11中,智能电动牙刷采集口腔图像可以通过微型广角摄像头,该微型广角摄像头设置在智能电动牙刷上。微型广角摄像头的镜头可以由水晶玻璃制成。
在可选的实施例中,步骤S11中,智能电动牙刷采集口腔图像可以包括在采集前打开补光模块。
在可选的实施例中，步骤S12中，智能电动牙刷判断采集到的口腔图像是否有效可以包括判断口腔图像是否清晰、判断口腔图像是否过曝或过暗以及判断口腔图像的成像角度是否合适。具体的，预设口腔图片清晰度、曝光度与成像角度的相关参数阈值，依据预设的参数阈值对口腔图像有效性的各判断条件依次进行判断，若口腔图片同时满足清晰、曝光度适中与成像角度合适三个条件，则判断该口腔图像有效，发送至服务器进行识别；若口腔图片不满足以上三个判断条件中的至少一个，则判断该口腔图像无效。
参考说明书附图3，步骤S13具体可以包括以下步骤：
S131:获取口腔图像;
S132:通过目标检测算法确定龋齿和/或牙结石的位置信息;
S133:通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息;
S134:生成识别结果。
在可选的实施例中,参考说明书附图4,步骤S132具体可以包括:
S1321:将接收到的口腔图像进行分割,获得S×S个网格(block);
S1322:在每个网格中设定多个框(box);
S1323:对每个框进行评估,评估内容包括,框内是否存在目标物体以及当框内存在目标物体时,目标物体的类别;
具体的,目标物体的类别可以是龋齿和/或牙结石。
S1324：删除不存在目标物体的框，确定存在目标物体的框的位置，其中，框的位置包括四个数值，中心点x值(b_x)和y值(b_y)，以及框的宽(b_w)和高(b_h)。
在可选的实施例中,步骤S132中的目标检测算法可以选用YOLOv5算法。
具体的，在输入端，YOLOv5采用Mosaic数据增强，将多张图像拼接在一起生成新的图像，从而扩充训练图像的数量。在算法训练中，当输入训练集图像时，YOLOv5能够自适应地缩放图像，使缩放填充后产生的黑边最少。
在确定存在目标物体的框的位置时，YOLOv5通过预测t_x、t_y、t_w和t_h来预测b_x、b_y、b_w和b_h，关系式如下所示：
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)
式中，t_x、t_y、t_w和t_h为网络的预测值，c_x和c_y为目标物体框所在网格单元的左上角点相对整张口腔图像的坐标值，p_w和p_h为先验框（anchor）的宽和高。
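上述换算关系可以用如下Python代码片段示意（该代码并非本申请的原始实现，函数decode_box及其参数命名均为说明性假设）：

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """按 b_x=σ(t_x)+c_x 等关系式，将网络输出的t值解码为预测框的中心点与宽高。"""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

print(decode_box(0.2, -0.1, 0.3, 0.1, cx=4, cy=6, pw=32, ph=48))
```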
在输出端，采用GIoU-loss损失函数来优化模型参数，公式如下：
L_GIoU = 1 - IoU + |C\(A∪B)| / |C|
其中，A和B分别是目标物体的框和真实物体的框，且
IoU = |A∩B| / |A∪B|
即A和B的交集比并集，C是A和B的最小的外接矩形。
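GIoU损失的计算过程可以用如下Python代码片段示意（该代码并非本申请的原始实现，框以(x1, y1, x2, y2)坐标表示属于示例性假设）：

```python
def giou_loss(box_a, box_b):
    """box以(x1, y1, x2, y2)表示，返回 1 - IoU + |C\(A∪B)| / |C|。"""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    # 交集面积
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0
    # 最小外接矩形C
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    area_c = cw * ch
    giou = iou - (area_c - union) / area_c if area_c > 0 else iou
    return 1.0 - giou

print(giou_loss((0, 0, 10, 10), (5, 5, 15, 15)))  # 约1.08
```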
整体的损失（Loss）函数可以写作：
Loss = λ_coord·Σ_{i=0}^{S²}Σ_{j=0}^{B} 1_{ij}^{obj}[(b_x - b̂_x)² + (b_y - b̂_y)² + (√b_w - √b̂_w)² + (√b_h - √b̂_h)²] + Σ_{i=0}^{S²}Σ_{j=0}^{B} 1_{ij}^{obj}(C_i - Ĉ_i)² + λ_noobj·Σ_{i=0}^{S²}Σ_{j=0}^{B} 1_{ij}^{noobj}(C_i - Ĉ_i)²
式中，b_x、b_y、b_w和b_h为预测值，b̂_x、b̂_y、b̂_w和b̂_h为对应的标注值，C_i和Ĉ_i分别为预测值和标注值的置信度，1_{ij}^{obj}为控制函数，表示网格i的第j个预测框中存在对象；1_{ij}^{noobj}表示网格i的第j个预测框中不存在对象；λ_coord和λ_noobj为引入的两个超参数，为了使含有检测目标的物体框占有的权重更大。
在可选的实施例中,由于使用YOLOv5算法引入了较多的框,可以通过非极大值抑制(Non-Maximum Suppression,NMS)操作对一些交叠重复的目标物体的框进行删除。
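非极大值抑制的基本处理流程可以参考如下Python代码片段（该代码并非本申请的原始实现，IoU阈值0.5为示例性假设）：

```python
def nms(boxes, scores, iou_threshold=0.5):
    """boxes: [(x1,y1,x2,y2)]，scores: 对应置信度；按置信度从高到低保留框，删除与已保留框交叠过大的重复框。"""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

def iou(a, b):
    """计算两个框的交并比。"""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]
```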
由此,实现龋齿和/或牙结石的高效率、高精度的智能筛查。
在可选的实施例中,参考说明书附图5,步骤S133具体可以包括:
S1331:基于通过目标检测算法确定的龋齿和/或牙结石的位置信息,将口腔图像进行分割,得到存在目标物体的牙齿图片,目标物体是龋齿和/或牙结石;
S1332:使用卷积神经网络对牙齿图片进行分级,每个级别对应不同的目标物体的严重程度;
在可选的实施例中，使用卷积神经网络对目标物体的严重程度进行分类，可以分三类，0代表轻度，1代表中度，2代表重度。由此，实现对龋齿和/或牙结石的严重程度的判断。
S1333:输出分类置信度,分类置信度越高,对应的目标物体的类别评估的准确度越高。
在可选的实施例中,利用卷积神经网络输出步骤S1331中每一个存在目标物体的牙齿图片的分类置信度。分类置信度越高,表明卷积神经网络认为该目标物体的类别评估结果的准确度越高。
可按照实际情况,使用分类置信度对识别结果进行过滤。
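作为步骤S1331~S1333的一个示意，下面给出一段基于PyTorch的Python代码草图（该代码并非本申请的原始实现，是否使用PyTorch、网络结构、输入尺寸及置信度阈值均为示例性假设），对裁剪出的牙齿图片进行三级严重程度分类，并按分类置信度对结果进行过滤：

```python
import torch
import torch.nn as nn

class SeverityClassifier(nn.Module):
    """对裁剪出的牙齿图片做0/1/2三级严重程度分类的极简CNN（结构为示例性假设）。"""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = SeverityClassifier().eval()
crop = torch.rand(1, 3, 64, 64)            # 假设的牙齿裁剪图输入
with torch.no_grad():
    probs = torch.softmax(model(crop), dim=1)
confidence, level = probs.max(dim=1)       # 分类置信度与严重程度分级
if confidence.item() >= 0.5:               # 按置信度阈值过滤（阈值为假设值）
    print(f"严重程度分级: {level.item()}, 置信度: {confidence.item():.2f}")
else:
    print("置信度过低，按实际情况过滤该识别结果")
```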
在可选的实施例中,步骤S134具体包括,根据步骤S132和步骤S133的输出结果,综合生成识别结果。
在可选的实施例中,步骤S13中,识别结果也可以仅包括牙齿的位置信息、牙齿是否存在龋齿的信息以及龋齿的严重程度分级信息,或仅包括牙齿的位置信息、牙齿是否存在牙结石的信息以及牙结石的严重程度分级信息。
在可选的实施例中,步骤S14中,还可以包括,服务器根据口腔图像获取用户牙齿数量、牙齿形态及用户口腔黏膜是否存在的信息。
在可选的实施例中,步骤S14中,还可以包括,服务器根据用户牙齿数量和牙齿形态对用户的年龄段进行判断。
在可选的实施例中,步骤S14中,在确定牙齿清洁参数时,可以根据用户的年龄段与牙齿数量,对儿童、老人及牙齿数量较少的用户,调整牙齿清洁参数中的刷牙时长,通过减少刷牙时长保护无牙区域的牙龈健康。
在可选的实施例中，步骤S15中，智能电动牙刷获取当前正在清洁的牙齿的位置信息可以通过陀螺仪实现，陀螺仪用于获取智能电动牙刷的位置、移动轨迹以及加速度参数，根据智能电动牙刷使用过程中的姿态参数判断正在清洁的牙齿的位置信息。
在可选的实施例中,该方法还包括,服务器可以根据识别结果生成口腔健康报告并将口腔健康报告发送至指定终端。口腔健康报告可以包括用户口腔问题类型、分级信息及相关口腔图像。
在可选的实施例中，该方法还包括，服务器可以将多个口腔健康报告进行比对，生成口腔健康趋势报告，并将口腔健康趋势报告发送至指定终端。
在可选的实施例中，该方法还包括，智能电动牙刷根据用户的按键操作进行拍照操作、刷牙操作和语音播放中的一种或多种。
本公开的有益效果为,通过获取用户的口腔图像,并通过目标检测算法和卷积神经网络对口腔图像进行识别分析,获取到精准的用户的龋齿、牙结石等相关的口腔健康信息,并据此对智能电动牙刷进行控制,为用户提供针对性的口腔清洗服务,提高了智能电动牙刷的清洗效果,提高用户体验;智能电动牙刷对口腔图像的有效性进行判断,避免了由于口腔图像不清晰、图片过曝或过暗以及成像角度问题等情况对识别结果产生的不良影响,进一步提高了识别结果的准确性;通过图像判断模块对无效图片的分析结果,控制语音播放模块进行相关的语音数据播放,以语音的形式对用户进行提示,便于用户获取更为直观的指导,以便获取有效的口腔图像,一方面,进一步提高识别结果的准确性,另一方面,提升用户体验。
另外，上述方法实施例与***实施例属于同一构思，其具体流程及相关有益效果可详见***实施例，这里不再赘述。
实施例3:
本申请一个实施例提供的智能电动牙刷,参考说明书附图6,应用于上述任一的基于人工智能图像识别调整电动牙刷的口腔健康管理***,具体可以包括,
图像获取模块101,用于采集口腔图像;
图像判断模块102,用于判断采集到的口腔图像是否为有效的口腔图像,若口腔图像有效,则由第一通讯模块发送至服务器进行识别,若口腔图像无效,则将图像无效的原因发送至主控模块;
第一通讯模块103,用于将有效的口腔图像发送至服务器,并接收服务器发回的识别结果或牙齿清洁参数,识别结果至少包括牙齿的位置信息、牙齿是否存在龋齿和/或牙结石的信息以及龋齿和/或牙结石的严重程度分级信息,牙齿清洁参数包括刷牙时长和/或振动频率;
定位模块104,用于获取正在清洁的牙齿的位置信息并发送至主控模块;
主控模块105，用于接收图像无效的原因，根据图像无效的原因在语音数据库中选择相应的语音数据，并根据正在清洁的牙齿的位置信息，选择对应的牙齿清洁参数转化成控制信号实时发送至电机驱动模块；
电机驱动模块106,用于连接电机,并根据主控模块的控制信号驱动电机振动;
语音播放模块107,用于根据主控模块选择的语音数据,进行语音播放。
在可选的实施例中,还包括,
按键操作模块,用于接收用户的按键操作信号并发送至主控模块;此时主控模块,能够用于接收按键操作信号并转化成相应的控制信号发送至图像获取模块、电机驱动模块和语音播放模块。
另外,本实施例提供的智能电动牙刷与***实施例属于同一构思,其具体实现过程及相关有益效果可详见***实施例,这里不再赘述。
本公开的有益效果为,智能电动牙刷获取用户的口腔图像,并接收服务器发回的识别结果或牙齿清洁参数,并据此对智能电动牙刷进行控制,为用户提供针对性的口腔清洗服务,提高了智能电动牙刷的清洗效果,提高用户体验;智能电动牙刷对口腔图像的有效性进行判断,避免了由于口腔图像不清晰、图片过曝或过暗以及成像角度问题等情况对识别结果产生的不良影响,进一步提高了识别结果的准确性;通过图像判断模块对无效图片的分析结果,控制语音播放模块进行相关的语音数据播放,以语音的形式对用户进行提示,便于用户获取更为直观的指导,以便获取有效的口腔图像,一方面,进一步提高识别结果的准确性,另一方面,提升用户体验。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、设备及存储介质的实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上仅为本公开的较佳实施例,并不用以限制本公开,凡在本公开的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本公开的保护范围之内。

Claims (10)

  1. 基于人工智能图像识别调整电动牙刷的口腔健康管理***,其特征在于,包括智能电动牙刷和服务器,
    其中,智能电动牙刷包括,
    图像获取模块,用于采集口腔图像;
    图像判断模块,用于判断采集到的口腔图像是否为有效的口腔图像,若口腔图像有效,则由第一通讯模块发送至服务器进行识别,若口腔图像无效,则将图像无效的原因发送至主控模块;
    第一通讯模块,用于将有效的口腔图像发送至服务器,并接收服务器发回的识别结果或牙齿清洁参数,所述识别结果至少包括牙齿的位置信息、牙齿是否存在龋齿和/或牙结石的信息以及龋齿和/或牙结石的严重程度分级信息,所述牙齿清洁参数包括刷牙时长和/或振动频率;
    定位模块,用于获取正在清洁的牙齿的位置信息并发送至主控模块;
    主控模块,用于根据正在清洁的牙齿的位置信息,选择对应的牙齿清洁参数转化成控制信号实时发送至电机驱动模块,并在接收到所述图像判断模块发送的图像无效的原因时,根据所述图像无效的原因在语音数据库中选择相应的语音数据;
    电机驱动模块,用于连接电机,并根据主控模块的控制信号驱动电机振动;和
    语音播放模块,用于根据主控模块选择的语音数据,进行语音播放;
    所述服务器包括,
    识别模块,用于对接收到的所述口腔图像进行识别,生成识别结果,识别方法包括获取口腔图像、通过目标检测算法确定龋齿和/或牙结石的位置信息、通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息和生成识别结果;和
    第二通讯模块,用于接收智能电动牙刷发出的口腔图像,并向智能电动牙刷发送识别结果或牙齿清洁参数;
    还包括参数确定模块,用于根据所述识别结果确定不同口腔区域对应的牙齿清洁参数,所述参数确定模块设置在智能电动牙刷或服务器上。
  2. 根据权利要求1所述的基于人工智能图像识别调整电动牙刷的口腔健康管理***,其特征在于,所述智能电动牙刷还包括,
    按键操作模块,用于接收用户的按键操作信号并发送至主控模块;和
    主控模块,能够用于接收按键操作信号并转化成相应的控制信号发送至图像获取模块、电机驱动模块和语音播放模块。
  3. 根据权利要求1所述的基于人工智能图像识别调整电动牙刷的口腔健康管理***,其特征在于,
    在服务器的识别模块中,所述通过目标检测算法确定龋齿和/或牙结石的位置信息包括,
    将接收到的口腔图像进行分割,获得S×S个网格;
    在每个网格中设定多个框;
    对每个框进行评估,评估内容包括,框内是否存在目标物体以及当框内存在目标物体时,所述目标物体的类别;
    删除不存在目标物体的框,确定存在目标物体的框的位置;
    以及,在服务器的识别模块中,所述通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息包括,
    基于通过目标检测算法确定的龋齿和/或牙结石的位置信息,将口腔图像进行分割,得到存在目标物体的牙齿图片,所述目标物体是龋齿和/或牙结石;
    使用卷积神经网络对所述牙齿图片进行分级,每个级别对应不同的目标物体的严重程度;
    输出分类置信度,所述分类置信度越高,对应的目标物体的类别评估的准确度越高。
  4. 根据权利要求1所述的基于人工智能图像识别调整电动牙刷的口腔健康管理***,其特征在于,所述服务器还包括,
    口腔健康报告生成模块,用于根据识别结果生成口腔健康报告;和
    第二通讯模块,能够用于将口腔健康报告发送至指定终端。
  5. 基于人工智能图像识别调整电动牙刷的智能电动牙刷的控制方法,其特征在于,应用于权利要求1-4任一所述的基于人工智能图像识别调整电动牙刷的口腔健康管理***,包括以下步骤,
    智能电动牙刷采集口腔图像;
    智能电动牙刷判断采集到的口腔图像是否有效,若口腔图像有效,则将有效的口腔图像上传至服务器,若口腔图像无效,则发出相应语音提示;
    服务器对接收到的所述口腔图像进行识别,生成识别结果,所述识别结果至少包括牙齿的位置信息、牙齿是否存在龋齿和/或牙结石的信息以及龋齿和/或牙结石的严重程度分级信息;
    服务器根据所述识别结果确定不同口腔区域对应的牙齿清洁参数并发送至智能电动牙刷,或服务器将所述识别结果发送至智能电动牙刷,智能电动牙刷根据所述识别结果确定不同口腔区域对应的牙齿清洁参数,所述牙齿清洁参数包括刷牙时长和/或振动频率;
    智能电动牙刷获取当前正在清洁的牙齿的位置信息;
    智能电动牙刷根据当前正在清洁的牙齿的位置信息,选择对应的牙齿清洁参数;
    智能电动牙刷根据当前牙齿清洁参数控制电机振动;
    其中,所述服务器对接收到的口腔图像进行识别,并生成识别结果具体包括,
    获取口腔图像;
    通过目标检测算法确定龋齿和/或牙结石的位置信息;
    通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息;
    生成识别结果。
  6. 根据权利要求5所述的智能电动牙刷的控制方法,其特征在于,
    所述智能电动牙刷根据用户的按键操作进行拍照操作、刷牙操作和语音播放中的一种或多种。
  7. 根据权利要求5所述的智能电动牙刷的控制方法,其特征在于,所述通过目标检测算法确定龋齿和/或牙结石的位置信息包括,
    将接收到的口腔图像进行分割,获得S×S个网格;
    在每个网格中设定多个框;
    对每个框进行评估,评估内容包括,所述框内是否存在目标物体以及当框内存在目标物体时,所述目标物体的类别;
    删除不存在目标物体的框,确定存在目标物体的框的位置;
    以及,所述通过卷积神经网络确定龋齿和/或牙结石的严重程度分级信息包括,
    基于通过目标检测算法确定的龋齿和/或牙结石的位置信息,将口腔图像进行分割,得到存在目标物体的牙齿图片,所述目标物体是龋齿和/或牙结石;
    使用卷积神经网络对所述牙齿图片进行分级,每个级别对应不同的目标物体的严重程度;
    输出分类置信度,所述分类置信度越高,对应的目标物体的类别评估的准确度越高。
  8. 根据权利要求5所述的智能电动牙刷的控制方法,其特征在于,还包括以下步骤,
    服务器根据识别结果生成口腔健康报告;
    服务器将所述口腔健康报告发送至指定终端。
  9. 智能电动牙刷,应用于权利要求1-4任一所述的基于人工智能图像识别调整电动牙刷的口腔健康管理***,其特征在于,包括,
    图像获取模块,用于采集口腔图像;
    图像判断模块,用于判断采集到的口腔图像是否为有效的口腔图像,若口腔图像有效,则由所述第一通讯模块发送至服务器进行识别,若口腔图像无效,则将图像无效的原因发送至主控模块;
    第一通讯模块,用于将有效的口腔图像发送至服务器,并接收服务器发回的识别结果或牙齿清洁参数,所述识别结果至少包括牙齿的位置信息、牙齿是否存在龋齿和/或牙结石的信息以及龋齿和/或牙结石的严重程度分级信息,所述牙齿清洁参数包括刷牙时长和/或振动频率;
    定位模块,用于获取正在清洁的牙齿的位置信息并发送至主控模块;
    主控模块，用于接收图像无效的原因，根据所述图像无效的原因在语音数据库中选择相应的语音数据，并根据正在清洁的牙齿的位置信息，选择对应的牙齿清洁参数转化成控制信号实时发送至电机驱动模块；
    电机驱动模块,用于连接电机,并根据主控模块的控制信号驱动电机振动;
    语音播放模块,用于根据主控模块选择的语音数据,进行语音播放。
  10. 根据权利要求9所述的智能电动牙刷,其特征在于,还包括,
    按键操作模块,用于接收用户的按键操作信号并发送至主控模块;和
    主控模块,能够用于接收按键操作信号并转化成相应的控制信号发送至图像获取模块、电机驱动模块和语音播放模块。
PCT/CN2021/114479 2021-06-17 2021-08-25 基于人工智能图像识别调整电动牙刷的口腔健康管理*** WO2022262116A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/497,714 US20240065429A1 (en) 2021-06-17 2023-10-30 Intelligent visualizing electric tooth brush

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110669700.XA CN113244009B (zh) 2021-06-17 2021-06-17 基于人工智能图像识别调整电动牙刷的口腔健康管理***
CN202110669700.X 2021-06-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/497,714 Continuation-In-Part US20240065429A1 (en) 2021-06-17 2023-10-30 Intelligent visualizing electric tooth brush

Publications (1)

Publication Number Publication Date
WO2022262116A1 true WO2022262116A1 (zh) 2022-12-22

Family

ID=77188336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/114479 WO2022262116A1 (zh) 2021-06-17 2021-08-25 基于人工智能图像识别调整电动牙刷的口腔健康管理***

Country Status (3)

Country Link
US (1) US20240065429A1 (zh)
CN (1) CN113244009B (zh)
WO (1) WO2022262116A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058526A (zh) * 2023-10-11 2023-11-14 创思(广州)电子科技有限公司 一种基于人工智能的自动货物识别方法及***

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113244009B (zh) * 2021-06-17 2021-11-09 深圳市弘玉信息技术有限公司 基于人工智能图像识别调整电动牙刷的口腔健康管理***
CN113768468B (zh) * 2021-09-23 2023-12-19 广州华视光学科技有限公司 一种多传感器、多功能的口腔问题定位设备和方法
KR20230102720A (ko) * 2021-12-30 2023-07-07 주식회사 큐티티 구강 이미지 학습 및 분류 장치 및 방법
CN114271978A (zh) * 2022-01-27 2022-04-05 广州华视光学科技有限公司 一种电动牙刷的控制方法、装置、***和电子设备
CN116777818B (zh) * 2022-03-11 2024-05-24 广州星际悦动股份有限公司 口腔清洁方案确定方法、装置、电子设备及存储介质
CN117608712A (zh) * 2023-09-13 2024-02-27 广州星际悦动股份有限公司 一种信息显示方法、装置、存储介质及电子设备
CN117462286A (zh) * 2023-11-24 2024-01-30 广州星际悦动股份有限公司 交互控制方法、装置、电子设备、口腔护理设备和介质
CN117796944A (zh) * 2023-12-29 2024-04-02 广州星际悦动股份有限公司 牙刷的控制方法、装置、牙刷及计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106806032A (zh) * 2015-11-30 2017-06-09 英业达科技有限公司 电动牙刷***
KR101800670B1 (ko) * 2017-03-02 2017-11-24 아람휴비스 주식회사 다기능을 갖는 전동 칫솔
CN110856667A (zh) * 2018-08-22 2020-03-03 珠海格力电器股份有限公司 牙齿清洁方法、设备、装置和存储介质
CN111227974A (zh) * 2020-01-23 2020-06-05 亚仕科技(深圳)有限公司 刷牙策略的生成方法及相关装置
CN112120391A (zh) * 2020-09-23 2020-12-25 曹庆恒 一种牙刷及其使用方法
CN113244009A (zh) * 2021-06-17 2021-08-13 深圳市弘玉信息技术有限公司 基于人工智能图像识别调整电动牙刷的口腔健康管理***

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589374B1 (en) * 2016-08-01 2017-03-07 12 Sigma Technologies Computer-aided diagnosis system for medical images using deep convolutional neural networks
CN106225174B (zh) * 2016-08-22 2020-10-13 珠海格力电器股份有限公司 空调器控制方法和***及空调器
CN107714222A (zh) * 2017-10-27 2018-02-23 南京牙小白健康科技有限公司 一种带有语音交互的儿童电动牙刷和使用方法
CN111191137A (zh) * 2019-12-31 2020-05-22 广州皓醒湾科技有限公司 基于牙齿颜色确定刷牙推荐方案的方法及装置


Also Published As

Publication number Publication date
CN113244009A (zh) 2021-08-13
CN113244009B (zh) 2021-11-09
US20240065429A1 (en) 2024-02-29

Similar Documents

Publication Publication Date Title
WO2022262116A1 (zh) 基于人工智能图像识别调整电动牙刷的口腔健康管理***
US9716831B2 (en) Imaging control apparatus, imaging control method, and program
US9060158B2 (en) Image pickup apparatus that continuously takes images to obtain multiple images, control method therefor, and storage medium
US9747492B2 (en) Image processing apparatus, method of processing image, and computer-readable storage medium
CN103888659B (zh) 摄像装置、摄像方法以及计算机可读取的存储介质
KR100815512B1 (ko) 촬상장치 및 그 제어 방법
JP5594133B2 (ja) 音声信号処理装置、音声信号処理方法及びプログラム
US7430369B2 (en) Image capture apparatus and control method therefor
US8605158B2 (en) Image pickup control apparatus, image pickup control method and computer readable medium for changing an image pickup mode
US20060290795A1 (en) Image pickup apparatus
US20050200722A1 (en) Image capturing apparatus, image capturing method, and machine readable medium storing thereon image capturing program
CN107277355A (zh) 摄像头切换方法、装置及终端
JP2004317699A (ja) デジタルカメラ
CN102111540A (zh) 图像处理装置、图像处理方法以及程序
CN102158646A (zh) 成像控制设备、成像设备、成像控制方法及程序
US20190332952A1 (en) Learning device, image pickup apparatus, image processing device, learning method, non-transient computer-readable recording medium for recording learning program, display control method and inference model manufacturing method
CN110298310A (zh) 图像处理方法及装置、电子设备和存储介质
CN109819229A (zh) 图像处理方法及装置、电子设备和存储介质
US10970868B2 (en) Computer-implemented tools and methods for determining optimal ear tip fitment
CN103248815A (zh) 摄像装置、摄像方法
CN106657801A (zh) 一种视频信息采集方法及装置
CN105872352A (zh) 一种拍摄照片的方法及装置
CN110581954A (zh) 一种拍摄对焦方法、装置、存储介质及终端
JP2017224335A (ja) 情報処理装置、情報処理方法、及びプログラム
US20110221949A1 (en) Shooting apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21945697

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE