et al., in Neural Processing Letters (2022)

Rain attenuation events are one of the foremost drawbacks in satellite communications, impairing satellite link availability. For this reason, it is necessary to foresee rain events to avoid an outage of the satellite link. In this paper, we propose and develop a method based on Machine Learning to predict rain attenuation events without resorting to complex mathematical models. Specifically, we implement a Long Short-Term Memory (LSTM) architecture, a Deep Learning algorithm based on an artificial recurrent neural network, trained in a supervised learning setting. For this purpose, rain attenuation time series feed the LSTM network at the input to train it. However, the lack of a rainfall database hinders the development of a reliable prediction method; therefore, we generate a synthetic rain attenuation database using the recommendations of the International Telecommunication Union. Each model is trained and validated by computational experiments, employing statistical metrics to find the most accurate and reliable models. The accuracy metric compares the outcomes of the proposal with other related methods and models. As a result, our best model reaches an accuracy of 91.88% versus 87.99% for the best external model, demonstrating superiority over other models/methods. On average, our proposal reaches an accuracy of 88.08%. Finally, we find that this proposal can contribute efficiently to improving the performance of satellite networks by re-routing data traffic or increasing link availability, taking advantage of the prediction of rain attenuation events.
Antonelo, Eric Aislan, in Neural Processing Letters (2007), 26(3), 233-249

Vlassis, Nikos, in Neural Processing Letters (2002), 15(1), 77-87

Learning a Gaussian mixture with a local algorithm like EM can be difficult because (i) the true number of mixing components is usually unknown, (ii) there is no generally accepted method for parameter initialization, and (iii) the algorithm can get trapped in one of the many local maxima of the likelihood function. In this paper we propose a greedy algorithm for learning a Gaussian mixture that tries to overcome these limitations. In particular, starting with a single component and adding components sequentially up to a maximum number k, the algorithm is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set. The algorithm is based on recent theoretical results on incremental mixture density estimation, and uses a combination of global and local search each time a new component is added to the mixture.

Vlassis, Nikos, in Neural Processing Letters (1999), 9(1), 63-76
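The greedy scheme described in the abstract — start with a single component, add components one at a time, and refine after each insertion — can be sketched in one dimension as below. This is a deliberate simplification under stated assumptions: the new component is placed at the worst-explained data point, whereas the paper uses a more elaborate combination of global and local search; all names and constants here are illustrative.

```python
import numpy as np

def em_1d(data, means, stds, weights, n_iter=50):
    """Plain EM for a 1-D Gaussian mixture with a fixed number of components."""
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        dens = np.array([w * np.exp(-0.5 * ((data - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                         for w, m, s in zip(weights, means, stds)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means, and stds from responsibilities
        nk = resp.sum(axis=1)
        weights = nk / len(data)
        means = (resp * data).sum(axis=1) / nk
        stds = np.sqrt((resp * (data - means[:, None]) ** 2).sum(axis=1) / nk)
        stds = np.maximum(stds, 1e-3)  # guard against component collapse
    return means, stds, weights

def greedy_mixture(data, k_max):
    """Greedy mixture learning: begin with one component, then repeatedly
    insert a new component (here: at the point with lowest current density,
    a crude stand-in for the paper's global/local search) and re-run EM."""
    means = np.array([data.mean()])
    stds = np.array([data.std()])
    weights = np.array([1.0])
    for _ in range(1, k_max):
        dens = sum(w * np.exp(-0.5 * ((data - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                   for w, m, s in zip(weights, means, stds))
        means = np.append(means, data[np.argmin(dens)])  # worst-explained point
        stds = np.append(stds, data.std() / 2)
        weights = np.append(weights * (1 - 1 / len(means)), 1 / len(means))
        means, stds, weights = em_1d(data, means, stds, weights)
    return means, stds, weights
```

On well-separated data (e.g. two clusters), the inserted component lands in the poorly-explained region and EM then refines both components, illustrating why sequential insertion can escape poor single-run EM initializations.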