Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS) and entails a huge communication overhead. To overcome this, federated learning (FL) has emerged as a promising tool, wherein the clients send only their model updates to the PS instead of their whole datasets. However, FL demands powerful computational resources from the clients; clients without sufficient resources therefore cannot participate in training. To address this issue, we introduce a more practical approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining clients send their datasets to the PS, which computes the model on their behalf. The model parameters corresponding to all clients are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two different techniques: increased computation-per-client and sequential data transmission. Since all clients contribute their datasets to the learning process, the HFCL frameworks outperform FL with up to 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL.
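The following minimal Python sketch illustrates the HFCL aggregation round described above, under our own assumptions about the local training rule; the names (local_update, hfcl_round) and the least-squares stand-in objective are illustrative and not taken from the paper. FL-capable clients compute updates on-device, the PS computes updates for the clients that uploaded their datasets, and all parameters are aggregated at the PS.

```python
import numpy as np

def local_update(model, data, lr=0.01, epochs=1):
    """One round of local training (placeholder gradient step on a linear model)."""
    w = model.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient as a stand-in loss
        w -= lr * grad
    return w

def hfcl_round(global_model, fl_client_data, cl_client_data):
    """One HFCL round: FL clients train locally and send parameters only;
    the PS trains on datasets uploaded by the remaining (CL) clients, then aggregates."""
    updates, weights = [], []
    # Active (FL) clients: updates are computed on-device.
    for data in fl_client_data:
        updates.append(local_update(global_model, data))
        weights.append(len(data[1]))
    # Passive (CL) clients: the PS computes their updates from the uploaded datasets.
    for data in cl_client_data:
        updates.append(local_update(global_model, data))
        weights.append(len(data[1]))
    # Dataset-size-weighted aggregation at the PS (FedAvg-style).
    return np.average(np.stack(updates), axis=0, weights=np.array(weights, float))
```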