5G networks have been a hot topic for the last few years, and, surprisingly, they became even more widely discussed after the coronavirus outbreak, when some people claimed, without any evidence, that 5G caused the pandemic. Leaving the nonsense discussions aside, let’s focus on the developments coming with 5G.
The main advantage of 5G is its use of millimeter waves. These waves have higher frequencies than those used by previous generations, which allows wider bandwidths and therefore higher data rates. In practice, this means much faster connections, more devices connected to the Internet concurrently, and lower latency. But all of this comes at a cost: millimeter waves are easily affected by atmospheric conditions, buildings, and other physical obstacles. Unlike traditional low-frequency signals, millimeter waves fade quickly and cannot travel long distances. For this reason, new concepts have been introduced in telecommunications. We can categorize them as the “5 Foundations of 5G”:
1- Millimeter Waves
2- Small Cells
3- Massive MIMO
4- Beamforming
5- Full Duplex
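To see why millimeter waves fade so quickly, consider the free-space path loss (Friis) formula: at a fixed distance, loss grows with frequency. A quick sketch comparing a sub-6 GHz carrier with a 28 GHz millimeter-wave carrier (the frequencies and distance are chosen purely for illustration):

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a traditional sub-6 GHz carrier with a 5G millimeter-wave carrier
loss_24 = free_space_path_loss_db(100, 2.4e9)   # ~80.1 dB
loss_28 = free_space_path_loss_db(100, 28e9)    # ~101.4 dB
print(f"2.4 GHz over 100 m: {loss_24:.1f} dB")
print(f"28 GHz over 100 m:  {loss_28:.1f} dB")
```

The roughly 21 dB gap (a factor of more than 100 in power) is why 5G needs the small cells and directive antennas listed above.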
In this post, we will focus on Artificial Intelligence methods for 5G, so our main interest will be the 3rd and 4th foundations: Massive MIMO and Beamforming. Let’s briefly explain these technical terms.
As discussed above for millimeter waves, wireless communication suffers from attenuation (power loss) and interference (spectrum collisions with other signals). Massive MIMO (Multiple Input Multiple Output) systems offer a solution to these problems: many antenna arrays are placed at the base station, and instead of linking a single antenna to a single user, multiple antennas are linked to multiple users.
In traditional communication systems, radial (circular) radiation is used, meaning the signal is radiated in every direction. However, users are not distributed uniformly around a circle; they sit at discrete positions. This makes circular radiation highly inefficient: it wastes energy and may also cause interference. A solution is to create more directive signals that target specific users rather than the whole circle.
Each antenna (or antenna array) is linked to a specific target by adjusting the amplitude and phase of its signal. We can visualize this Beamforming solution as below: each device is connected to its corresponding antenna by a directed signal. Beamforming avoids consuming extra spectrum, saves energy, and reduces interference.
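The amplitude and phase adjustment described above can be made concrete with the array factor of a uniform linear array: choosing per-antenna phase shifts steers the main lobe toward a chosen angle. A minimal NumPy sketch, assuming 16 antennas at half-wavelength spacing:

```python
import numpy as np

def array_factor(n_antennas, spacing_wl, steer_deg, look_deg):
    """Normalized array-factor magnitude of a uniform linear array
    steered to steer_deg, evaluated at look_deg (angles from broadside)."""
    k = 2 * np.pi  # phase constant with distances measured in wavelengths
    n = np.arange(n_antennas)
    # Per-antenna phase weights that steer the beam toward steer_deg
    steer = np.exp(-1j * k * spacing_wl * n * np.sin(np.radians(steer_deg)))
    # Propagation phases seen from the look direction
    observe = np.exp(1j * k * spacing_wl * n * np.sin(np.radians(look_deg)))
    return abs(np.sum(steer * observe)) / n_antennas

# 16-element array, half-wavelength spacing, beam steered to 30 degrees
print(array_factor(16, 0.5, 30, 30))  # 1.0 at the steered direction
print(array_factor(16, 0.5, 30, 0))   # nearly zero off the main lobe
```

The same phase weights that give full gain at 30 degrees suppress radiation toward other directions, which is exactly the energy-saving, interference-reducing behavior described above.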
Having explained the Massive MIMO and Beamforming concepts, let’s explore how Artificial Intelligence can be applied to them.
For base stations, deciding on the optimal beamforming adjustment is often not easy, since they lack computationally powerful resources. On-site optimization would take more time and power than a base station can afford. To solve this problem, training can be done externally, leaving only the prediction step to the base station. A pre-built machine learning model drastically reduces the time and power needed for beamforming optimization.
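One way to picture this offline-training / online-prediction split: the expensive step (building a beam-selection table over a training grid) runs externally, while the base station only performs a cheap lookup. The 8-beam codebook and 1-degree grid below are hypothetical, and the nearest-beam rule stands in for what a real model would learn from channel measurements:

```python
import numpy as np

# Hypothetical 8-beam codebook covering -60..60 degrees from broadside
codebook_angles = np.linspace(-60, 60, 8)

def train_beam_table(grid_deg):
    """Offline, compute-rich step: precompute the best beam index for
    every angle on a training grid (a trivial nearest-beam rule here;
    a real system would learn this mapping from measurements)."""
    return np.array([int(np.argmin(np.abs(codebook_angles - a)))
                     for a in grid_deg])

grid = np.linspace(-60, 60, 121)  # 1-degree training grid
table = train_beam_table(grid)

def predict_beam(user_angle_deg):
    """Online, base-station step: prediction is a cheap table lookup."""
    nearest = int(np.argmin(np.abs(grid - user_angle_deg)))
    return int(table[nearest])

print(predict_beam(25.0))  # index of the codebook beam closest to 25 degrees
```

All the heavy computation happens when `table` is built; `predict_beam` is just two array operations, which is the kind of workload a base station can afford.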
Unlike other application fields of machine learning, such as Computer Vision or Natural Language Processing, obtaining real data is not easy in communication systems. For 5G in particular, generating real data is quite expensive because of the various instruments that must be set up. We need a simpler way to obtain data without spending huge capital.
Generative Adversarial Networks (GANs) play a major role in creating unseen data. They have popular applications such as generating new Renaissance-style paintings by learning from existing ones. GANs are a natural fit for communication systems when it comes to creating realistic-looking data.
So, where can GANs be used in 5G? Since location estimation is crucial for beamforming, we can generate user locations from probabilistic distributions. While one neural network (the GAN) is responsible for creating user location data, another neural network outputs the corresponding antenna diagram, including the amplitude and phase of the signal.
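A hedged sketch of this pipeline: a trained GAN generator is a learned network, but its output can be imitated here by sampling user positions from an assumed two-cluster Gaussian mixture, and the closed-form steering phases stand in for what the second (antenna-diagram) network would learn. All cluster centers and array parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_user_locations(n):
    """Stand-in for a trained GAN generator: draw user (x, y) positions
    from an assumed two-cluster Gaussian mixture (hypothetical parameters)."""
    centers = np.array([[50.0, 30.0], [-40.0, 60.0]])
    idx = rng.integers(0, 2, size=n)
    return centers[idx] + rng.normal(scale=10.0, size=(n, 2))

def steering_phases(location, n_antennas=8, spacing_wl=0.5):
    """Closed-form 'antenna diagram' the second network would learn:
    amplitudes and phases that steer a linear array toward the user."""
    angle = np.arctan2(location[0], location[1])  # angle from broadside (y-axis)
    phases = -2 * np.pi * spacing_wl * np.arange(n_antennas) * np.sin(angle)
    amplitudes = np.ones(n_antennas)
    return amplitudes, phases

users = sample_user_locations(3)
amps, phs = steering_phases(users[0])
print(users.shape, amps.shape, phs.shape)  # (3, 2) (8,) (8,)
```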
Here’s the challenge: in traditional supervised applications, the data is labeled, so we know the expected output and can calculate a loss, whether categorical or continuous. Here, there is no predefined output, so we cannot compute a loss. Instead, we can define a reward function that tells us whether the current antenna diagram is appropriate. Stepping back and looking at the whole picture, we arrive at a Reinforcement Learning model: one network produces data, another creates an antenna diagram, and the last one is responsible for rewarding the output.
Without diving into the details of telecommunication terminology, we can say that using the function below as the reward function improves the throughput of the system. The third neural network returns a reward that leads the second neural network to update the antenna diagram as shown below.
These steps are repeated until we satisfy the throughput criterion below:
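The exact reward formula and stopping criterion shown in the original figures are not reproduced here. A standard throughput proxy, assumed for this sketch, is the Shannon capacity log2(1 + SNR), with the effective SNR scaled by the beamforming gain toward the user. The repeat-until-criterion loop below uses a hypothetical 3 bits/s/Hz threshold and a simple candidate search in place of the learned update:

```python
import numpy as np

def array_gain(steer_deg, user_deg, n=16, d=0.5):
    """Normalized gain of an n-element linear array (spacing d wavelengths)
    steered to steer_deg, observed from the user's direction user_deg."""
    k = np.arange(n)
    phase = 2 * np.pi * d * k * (np.sin(np.radians(user_deg))
                                 - np.sin(np.radians(steer_deg)))
    return abs(np.exp(1j * phase).sum()) / n

def reward(steer_deg, user_deg, snr_db=10.0):
    """Throughput proxy (Shannon capacity, bits/s/Hz): log2(1 + gain^2 * SNR)."""
    snr = 10 ** (snr_db / 10) * array_gain(steer_deg, user_deg) ** 2
    return float(np.log2(1 + snr))

# Repeat until the throughput criterion is met: try candidate antenna
# diagrams (steering angles) and stop once the reward passes the threshold.
user_deg = 20.0    # user direction produced by the generator network
threshold = 3.0    # assumed throughput criterion, bits/s/Hz
chosen = None
for s in np.linspace(-60, 60, 241):
    if reward(s, user_deg) >= threshold:
        chosen = float(s)
        break
print(chosen)  # an angle close to the user direction of 20 degrees
```

The loop terminates as soon as a diagram clears the criterion, which mirrors the stopping rule described above; a real RL agent would replace the exhaustive candidate sweep with learned updates driven by the reward signal.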
We discussed the future of 5G, emphasizing two of its fundamental components, and explored how AI can be applied to 5G problems. Since communication systems suffer from limited data availability for machine learning models, supervised training is not a viable solution. Instead, a Reinforcement Learning model consisting of three different neural networks was described. This model aims to maximize the throughput of the system and thereby offer better service to users.
Telecommunication companies like Nokia make use of Machine Learning solutions for communication problems. You can check out the papers and the news below for further information.