A SAM Based Tool for Semi-Automatic Food Annotation
Date of Publication
October 16, 2024
Publication Type
Article
Series
Frontiers in Artificial Intelligence and Applications
ISSN
0922-6389 (print)
1879-8314 (online)
Publisher
IOS Press
Language
English
Description
The advancement of artificial intelligence (AI) in food and nutrition research is hindered by a critical bottleneck: the lack of annotated food data. Despite the rise of highly efficient AI models designed for tasks such as food segmentation and classification, their practical application may require proficiency in AI and machine learning principles, which poses a challenge for non-AI experts in the field of nutritional sciences. This highlights the need to translate AI models into user-friendly tools that are accessible to all. To address this, we present a demo of a semi-automatic food image annotation tool leveraging the Segment Anything Model (SAM) [15]. The tool enables prompt-based food segmentation via user interactions, promoting user engagement and allowing users to further categorise food items within meal images and specify weight/volume where necessary. Additionally, we release a fine-tuned version of SAM's mask decoder, dubbed MealSAM, with the ViT-B backbone tailored specifically for food image segmentation. Our objective is not only to contribute to the field by encouraging participation, collaboration, and the gathering of more annotated food data, but also to make AI technology available to a broader audience by translating AI into practical tools.
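The prompt-based interaction described in the abstract (a user clicks a point, the model returns a mask, and the region is then categorised and weighed) can be illustrated with a minimal stand-in. The sketch below replaces SAM/MealSAM with a simple connected-component fill from the clicked pixel on a synthetic label image; the function name, the synthetic image, and the annotation dictionary are all hypothetical and serve only to show the interaction pattern, not the actual model or tool.

```python
import numpy as np
from collections import deque

def point_prompt_mask(labels, seed):
    """Toy stand-in for prompt-based segmentation: return the boolean
    mask of the 4-connected region containing the clicked pixel."""
    h, w = labels.shape
    target = labels[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and labels[nr, nc] == target:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Synthetic "meal image": two food regions (1 and 2) on background 0.
img = np.zeros((6, 6), dtype=int)
img[1:3, 1:3] = 1
img[3:5, 3:6] = 2

# The user clicks inside region 1; the tool would then let them
# attach a category and an optional weight/volume to the mask.
mask = point_prompt_mask(img, (1, 1))
annotation = {"category": "rice", "pixel_area": int(mask.sum())}
```

In the real tool, `point_prompt_mask` would be replaced by a call to SAM's (or MealSAM's) mask decoder with the click coordinates as a point prompt; the surrounding workflow of prompting, categorising, and recording weight/volume is what the demo exposes to non-AI experts.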
File(s)
| File | File Type | Format | Size | License | Publisher/Copyright statement | Content |
|---|---|---|---|---|---|---|
| FAIA-392-FAIA241033 (2).pdf | text | Adobe PDF | 925.31 KB | Attribution-NonCommercial (CC BY-NC 4.0) | | published |