BORIS Portal

Bern Open Repository and Information System

A SAM Based Tool for Semi-Automatic Food Annotation Comparing Food

BORIS DOI
10.48620/87902
Date of Publication
October 16, 2024
Publication Type
Article
Division/Institute

ARTORG Center - Artificial Intelligence in Health and Nutrition
ARTORG Center for Biomedical Engineering Research
Graduate School for Cellular and Biomedical Sciences (GCB)

Author
Abdur Rahman, Lubnaa
  • ARTORG Center - Artificial Intelligence in Health and Nutrition
  • Graduate School for Cellular and Biomedical Sciences (GCB)
Papathanail, Ioannis
  • ARTORG Center for Biomedical Engineering Research
  • ARTORG Center - Artificial Intelligence in Health and Nutrition
Brigato, Lorenzo
  • ARTORG Center for Biomedical Engineering Research
Mougiakakou, Stavroula
  • ARTORG Center for Biomedical Engineering Research
  • ARTORG Center - Artificial Intelligence in Health and Nutrition
Series
Frontiers in Artificial Intelligence and Applications
ISSN or ISBN (if monograph)
0922-6389 (print)
1879-8314 (online)
Publisher
IOS Press
Language
English
Publisher DOI
10.3233/FAIA241033
Description
The advancement of artificial intelligence (AI) in food and nutrition research is hindered by a critical bottleneck: the lack of annotated food data. Despite the rise of highly efficient AI models designed for tasks such as food segmentation and classification, their practical application may require proficiency in AI and machine learning principles, which can be a challenge for non-AI experts in the field of nutritional sciences. This highlights the need to translate AI models into user-friendly tools that are accessible to all. To address this, we present a demo of a semi-automatic food image annotation tool leveraging the Segment Anything Model (SAM) [15]. The tool enables prompt-based food segmentation via user interactions, promoting user engagement and allowing users to further categorise food items within meal images and to specify weight/volume if necessary. Additionally, we release a fine-tuned version of SAM's mask decoder, dubbed MealSAM, with the ViT-B backbone tailored specifically for food image segmentation. Our objective is not only to contribute to the field by encouraging participation, collaboration, and the gathering of more annotated food data, but also to make AI technology available to a broader audience by translating AI into practical tools.
Handle
https://boris-portal.unibe.ch/handle/20.500.12422/210671
File(s)
File: FAIA-392-FAIA241033 (2).pdf
File Type: text
Format: Adobe PDF
Size: 925.31 KB
License: Attribution-NonCommercial (CC BY-NC 4.0)
Publisher/Copyright statement: published
Content: Open
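
The abstract describes prompt-based segmentation, in which a user click on a food item is turned into a point prompt for SAM. The following minimal sketch illustrates that interaction pattern using the public segment-anything package with a ViT-B checkpoint; it is not the published tool's code, and the checkpoint path, image file, and click coordinates are placeholders. The released MealSAM mask-decoder weights would be loaded the same way.

    # Minimal sketch of prompt-based food segmentation with a ViT-B SAM model.
    # Assumes the open-source "segment-anything" package; paths and coordinates
    # below are placeholders, not part of the published tool.
    import numpy as np
    import cv2
    from segment_anything import sam_model_registry, SamPredictor

    # Load a ViT-B SAM model; a fine-tuned mask decoder such as MealSAM would
    # be swapped in at this point (checkpoint path is hypothetical).
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    # Read a meal image (converted to RGB) and register it with the predictor.
    image = cv2.cvtColor(cv2.imread("meal.jpg"), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)

    # A user click on a food item becomes a positive point prompt (x, y).
    point_coords = np.array([[320, 240]])
    point_labels = np.array([1])  # 1 = foreground, 0 = background

    # Predict candidate masks for the clicked item and keep the best-scoring one.
    masks, scores, _ = predictor.predict(
        point_coords=point_coords,
        point_labels=point_labels,
        multimask_output=True,
    )
    best_mask = masks[np.argmax(scores)]

    # The annotator can then attach a food category and an optional weight/volume
    # to this mask before exporting it as an annotation.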