References of "Yang, Yang"

Sparsification and Optimization for Energy-Efficient Federated Learning in Wireless Edge Networks
Lei, Lei; Yuan, Yaxiong; Yang, Yang et al.

in IEEE (2022)

Federated Learning (FL), as an effective decentralized approach, has attracted considerable attention in privacy-preserving applications for wireless edge networks. In practice, edge devices are typically limited in energy, memory, and computation capabilities. In addition, the communication between the central server and edge devices operates under constrained resources, e.g., power or bandwidth. In this paper, we propose a joint sparsification and optimization scheme to reduce the energy consumption in local training and data transmission. On the one hand, we introduce sparsification, leading to a large number of zero weights in sparse neural networks, to alleviate the devices' computational burden and reduce the volume of data to be uploaded. To handle the non-smoothness incurred by sparsification, we develop an enhanced stochastic gradient descent algorithm to improve the learning performance. On the other hand, we optimize power, bandwidth, and learning parameters to avoid communication congestion and enable energy-efficient transmission between the central server and edge devices. With the two components deployed jointly, numerical results show that the overall energy consumption in FL can be significantly reduced compared to benchmark FL with fully-connected neural networks.
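
The sparsification idea in this abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only, assuming a plain proximal SGD step with an L1 penalty rather than the paper's enhanced SGD variant; the soft-thresholding proximal operator sets small weights exactly to zero, which is what lets a device train a sparse model and upload fewer nonzero parameters. All function names and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(w, tau):
    # Proximal operator of tau * ||w||_1: shrinks weights toward zero
    # and sets the small ones exactly to zero (the source of sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def local_sparse_sgd_step(w, grad, lr, l1):
    # One local training step on a device: gradient step on the smooth
    # loss, then soft-thresholding to handle the nonsmooth L1 term.
    return soft_threshold(w - lr * grad, lr * l1)

# Toy usage: sparse least squares. After a few hundred steps most weights
# are exactly zero, so only the nonzero entries would need uploading.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:5] = rng.normal(size=5)            # ground truth is sparse
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(50)
for _ in range(500):
    i = rng.integers(0, 200, size=32)      # mini-batch indices
    grad = X[i].T @ (X[i] @ w - y[i]) / len(i)
    w = local_sparse_sgd_step(w, grad, lr=0.05, l1=0.02)

print("nonzero weights:", np.count_nonzero(w), "of", w.size)
```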

ProxSGD: Training Structured Neural Networks under Regularization and Constraints
Yang, Yang; Yuan, Yaxiong; Chatzimichailidis, Avraam et al.

in International Conference on Learning Representations (ICLR) (2020)

Load Coupling and Energy Optimization in Multi-Cell and Multi-Carrier NOMA Networks
Lei, Lei; You, Lei; Yang, Yang et al.

in IEEE Transactions on Vehicular Technology (2019)

Inexact Block Coordinate Descent Algorithms for Nonsmooth Nonconvex Optimization
Yang, Yang; Pesavento, Marius; Luo, Zhi-Quan et al.

in IEEE Transactions on Signal Processing (2019)

In this paper, we propose an inexact block coordinate descent algorithm for large-scale nonsmooth nonconvex optimization problems. At each iteration, a particular block variable is selected and updated by inexactly solving the original optimization problem with respect to that block variable. More precisely, a local approximation of the original optimization problem is solved. The proposed algorithm has several attractive features, namely, i) high flexibility, as the approximation function only needs to be strictly convex and does not have to be a global upper bound of the original function; ii) fast convergence, as the approximation function can be designed to exploit the problem structure at hand and the stepsize is calculated by line search; iii) low complexity, as the approximation subproblems are much easier to solve and the line search scheme is carried out over a properly constructed differentiable function; iv) guaranteed convergence of a subsequence to a stationary point, even when the objective function does not have a Lipschitz continuous gradient. Interestingly, when the approximation subproblem is solved by a descent algorithm, convergence of a subsequence to a stationary point is still guaranteed even if the descent algorithm is terminated after a finite number of iterations. These features make the proposed algorithm suitable for large-scale problems where the dimension exceeds the memory and/or processing capability of the existing hardware. These features are also illustrated by several applications in signal processing and machine learning, for instance, network anomaly detection and phase retrieval.
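
A minimal sketch of the scheme described in this abstract, assuming a smooth toy objective, cyclic block selection, and a simple quadratic approximation (the paper additionally covers nonsmooth terms and more general strictly convex approximation functions); every name and constant below is illustrative, not the authors' implementation.

```python
import numpy as np

def inexact_bcd(f, grad_f, x0, blocks, c=10.0, inner_iters=3,
                outer_iters=2000, beta=0.5, sigma=1e-4):
    # Sketch of inexact block coordinate descent. Per outer iteration,
    # one block is selected (cyclically here) and a strictly convex
    # quadratic approximation around the current iterate,
    #   q(u) = g . (u - x[blk]) + (c/2) * ||u - x[blk]||^2,
    # is minimized *inexactly* by a few gradient steps; the resulting
    # direction is then scaled by an Armijo line search on f itself.
    x = x0.copy()
    for k in range(outer_iters):
        blk = blocks[k % len(blocks)]
        g = grad_f(x)[blk]
        u = x[blk].copy()
        for _ in range(inner_iters):               # inexact subproblem solve
            u -= (0.5 / c) * (g + c * (u - x[blk]))
        d = np.zeros_like(x)
        d[blk] = u - x[blk]                        # block descent direction
        gamma, fx, slope = 1.0, f(x), float(grad_f(x) @ d)
        while f(x + gamma * d) > fx + sigma * gamma * slope and gamma > 1e-12:
            gamma *= beta                          # Armijo backtracking
        x = x + gamma * d
    return x

# Toy usage: the nonconvex Rosenbrock function split into two blocks.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad_f = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
x = inexact_bcd(f, grad_f, np.array([-1.0, 1.0]),
                blocks=[np.array([0]), np.array([1])])
print("approximate stationary point:", x)
```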

Protecting Elliptic Curve Cryptography Against Memory Disclosure Attacks
Yang, Yang; Guan, Zhi; Liu, Zhe et al.

in Shi, Elaine; Yiu, S.M. (Eds.), Information and Communications Security (2014, December)
