FedClusterEnsemble: enhancing clustered federated learning through confidence-based ensembles and cyclic intra-cluster client selection

Author
Sousa, Artur Freitas
Akabane, Ademar Takeo
Estrella, Julio
Publication date
2025
Content type
Article
Access rights
Open access
Abstract

Federated Learning (FL) has emerged as a transformative paradigm for decentralized model training, enabling multiple clients to collaboratively learn without sharing sensitive data. However, FL’s performance is often hindered by non-independent and identically distributed (non-IID) data across clients, resulting in substantial degradation in model accuracy. Clustered Federated Learning (CFL) addresses this challenge by grouping clients based on data similarity to improve training efficiency. Despite these advancements, existing CFL frameworks lack effective mechanisms to support inference-only clients—those that do not participate in training but require reliable model predictions. This paper introduces a novel ensemble-based strategy tailored for inference-only clients with highly skewed data distributions. Additionally, we propose a cyclic client selection strategy that significantly reduces communication overhead while maintaining model performance. Our framework constructs multiple client clusters based on data similarity, allowing each cluster to train models independently while adapting to diverse data distributions through hyperparameter tuning. During inference, a confidence-based ensemble method is employed, selecting the model with the highest predictive confidence for each test sample instead of conventional averaging or majority voting. This approach enhances predictive accuracy and ensures robust performance even when inference-only clients encounter previously unseen data distributions. Experiments conducted on benchmark datasets, including CIFAR-10 and SVHN, demonstrate that the proposed method outperforms FedAvg in both accuracy and communication efficiency, particularly in federated settings characterized by extreme data heterogeneity. Moreover, the framework is especially suitable for IoT applications—such as edge computing, sensor networks, and vehicular systems—where data heterogeneity and limited communication resources are critical challenges. These contributions advance the practical deployment of FL in real-world domains such as healthcare, IoT, and vehicular networks.
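The two mechanisms described above can be illustrated with a minimal sketch (this is not the authors' implementation). The Python snippet below assumes each cluster model exposes per-sample class probabilities (e.g., softmax outputs), uses the maximum class probability as the confidence score, and models cyclic intra-cluster selection as a simple round-robin walk over a cluster's client list; the function names and the toy data are hypothetical.

import numpy as np

def confidence_ensemble_predict(cluster_probs):
    """Confidence-based ensemble: for each test sample, keep the prediction of the
    cluster model whose output is most confident (highest peak probability),
    instead of averaging or majority voting.

    cluster_probs: list of (n_samples, n_classes) arrays, one per cluster model.
    Returns: (n_samples,) array of predicted class indices.
    """
    stacked = np.stack(cluster_probs)        # (n_clusters, n_samples, n_classes)
    confidences = stacked.max(axis=2)        # peak probability per model, per sample
    best_model = confidences.argmax(axis=0)  # most confident model for each sample
    sample_idx = np.arange(stacked.shape[1])
    return stacked[best_model, sample_idx].argmax(axis=1)

def cyclic_client_selection(cluster_clients, round_idx, clients_per_round):
    """Cyclic intra-cluster selection (assumed round-robin): take the next
    `clients_per_round` clients from the cluster's fixed ordering each round,
    so every client participates at a predictable, reduced frequency.
    """
    n = len(cluster_clients)
    start = (round_idx * clients_per_round) % n
    return [cluster_clients[(start + i) % n] for i in range(clients_per_round)]

# Toy usage: three cluster models, four test samples, ten classes (CIFAR-10-like).
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(10), size=4) for _ in range(3)]
print(confidence_ensemble_predict(probs))
print(cyclic_client_selection(list(range(8)), round_idx=2, clients_per_round=3))

Selecting the single most confident model per sample, rather than averaging, lets an inference-only client defer to whichever cluster model best matches its possibly unseen data distribution, which is the behavior the abstract attributes to the confidence-based ensemble.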

Keywords
Federated learning
Intra-cluster
Client selection
Non-IID data
Edge computing
Language
English
This item appears in the following collections:
  • Publicações
