
WiGenAI: The Symphony of Wireless and Generative AI via Diffusion Models

October 11, 2023

Mehdi Letafati, Student Member, IEEE, Samad Ali, Member, IEEE, and Matti Latva-aho, Senior Member, IEEE

Abstract—Innovative foundation models, such as GPT-3 and stable diffusion models, have made a paradigm shift in the realm of artificial intelligence (AI) towards generative AI-based systems. In unison, from a data communication and networking perspective, AI and machine learning (AI/ML) algorithms are envisioned to be pervasively incorporated into future generations of wireless communication systems, highlighting the need for novel AI-native solutions for emerging communication scenarios. In this article, we outline the applications of generative AI in wireless communication systems to lay the foundations for research in this field. Diffusion-based generative models, as the new state-of-the-art paradigm of generative models, are introduced, and their applications in wireless communication systems are discussed. Two case studies are also presented to showcase how diffusion models can be exploited for the development of resilient AI-native communication systems. Specifically, we propose denoising diffusion probabilistic models (DDPMs) for a wireless communication scheme with non-ideal transceivers, where a 30% improvement is achieved in terms of bit error rate. As the second application, DDPMs are employed at the transmitter to shape the constellation symbols, highlighting a robust out-of-distribution performance. Finally, future directions and open issues for the development of generative AI-based wireless systems are discussed to promote future research endeavors towards wireless generative AI (WiGenAI).

Index Terms—Generative AI, diffusion models, AI-native wireless, network resilience, wireless AI.

I. INTRODUCTION

Thanks to the brilliant performance showcased by generative pre-trained transformers (GPT) and diffusion models, generative artificial intelligence (GenAI) has received significant attention from both academia and industry, garnering extensive research and development efforts. The evolution of diffusion models, as the state-of-the-art family of generative models, is considered one of the key enablers of the recent breakthroughs in the field of GenAI [1], with well-known solutions such as DALL·E 3 by OpenAI and Imagen by Google Brain, to name a few. In concert with the swift progress in AI algorithms, the sixth generation (6G) of communication networks is envisioned to introduce “native intelligence” as a key component in system design [2]. This brings to the forefront the call for cutting-edge AI solutions for developing “AI-native” wireless systems.

The research on state-of-the-art generative models (particularly the diffusion model and its variants) continues to thrive across different domains of the computer science community, such as natural language processing (NLP), computer vision, and medical imaging [3], as shown in Fig. 1. Nevertheless, despite the strong connection between the functional mechanism of diffusion models and communication engineering problems, only a few works have investigated the potential merits of diffusion models for wireless systems [4]–[11]. Notably, the incorporation of GenAI into wireless communication problems is still in its infancy, and our goal in this article is to shed light on some of the potential directions.

Turning to the literature on GenAI-based wireless systems, the authors in [4] propose a diffusion model-assisted workflow for wireless networks. As a hybrid approach, they exploit diffusion models to improve the exploration ability of reinforcement learning algorithms for network management. In [5], diffusion models are utilized to synthetically generate channel realizations for an end-to-end (E2E) wireless system. The results highlight a promising performance for diffusion models compared to generative adversarial networks (GANs). The authors show that GANs suffer from unstable training and less diversity in generation performance, while diffusion models maintain a more stable training process and better generalization during inference. The authors in [6] propose employing a diffusion model to improve the performance of the receiver in terms of noise and channel estimation error removal. The idea is further extended in [7] for a communication system with non-ideal transceivers, showcasing more than 25 dB improvement in reconstruction performance compared to deep neural network (DNN)-based receivers. As a hybrid scheme proposed in [8], diffusion models are employed in conjunction with neural networks for deep learning-based joint source-channel coding to complement digital communication schemes. The results show improvements in the reconstruction quality when diffusion models are employed. In [9], a diffusion model is employed for channel estimation in multi-antenna wireless systems. The results imply a competitive performance for both in-distribution and out-of-distribution (OOD) scenarios compared to GANs. In [10], diffusion model-aided single-channel source separation is proposed for digital communications. Simulation results over BPSK and QPSK modulation schemes demonstrate that diffusion models outperform conventional DNN-based methods in terms of bit error rate (BER) and mean-squared error (MSE). In a recent work [11], a novel algorithm for PHY signal design with GenAI is proposed using diffusion models. The results highlight a significant performance gain compared to conventional DNNs, which can be considered an initial step towards reinventing future generations of wireless communications.

The applications of GenAI, specifically diffusion models, in communications engineering are at an early developmental phase, implying a relatively nascent endeavor with ample room for in-depth exploration. The current body of literature in this domain has tried to demonstrate initial advancements in understanding some of the roles diffusion models can play in wireless communications. However, there remains a notable research gap pertaining to the incorporation of state-of-the-art generative models into AI-native wireless systems. Notably, as new communication and networking technologies emerge, there is still limited research on the possible applications, benefits, and drawbacks of utilizing GenAI algorithms for the future generations of wireless systems.

Fig. 1: Taxonomy of diffusion model applications. We study the applications of diffusion models in communication systems.

In this article, we first provide the background on why and how diffusion models can address problems in wireless systems. We further delve into wireless GenAI (WiGenAI) by conducting two case studies. Specifically, we employ denoising diffusion probabilistic models (DDPMs), a state-of-the-art class of diffusion models proposed by Ho et al. [1], for a practical finite-precision, hardware-impaired communication system. We demonstrate the resilience of this approach under low-SNR regimes. As the second case study, we employ diffusion models for constellation shaping to shed light on one of the possible solutions of GenAI for PHY engineering at the transmitter. We demonstrate that the GenAI-based solution outperforms the DNN-based scheme and exhibits robust OOD performance. We also provide visions and unexplored directions on the possible applications of GenAI algorithms in wireless communication systems.

The rest of the paper is organized as follows. We first introduce the fundamental concepts of diffusion models in Section II. Next, Section III sheds light on some of the possible roles that diffusion models can play in wireless systems. Section IV delves into realizing WiGenAI by carrying out two case studies, together with numerical experiments. Finally, Section V discusses future directions and open issues, and Section VI concludes the article.

II. FUNDAMENTALS OF DIFFUSION MODELS

Diffusion models are a novel and powerful class of state-of-the-art probabilistic generative models that have showcased high sample quality, strong mode coverage, and sample diversity [1]. Like the mythical bird that rises from its ashes, the diffusion model is characterized by a transition from chaos (noise) to creation (data generation). In this section, we provide the fundamentals and key concepts behind diffusion-based generative modeling.

Key Concepts & Ideas: Generally speaking, the mechanism behind diffusion-based generative models is to decompose the data generation process into small “denoising” steps, through which the diffusion model corrects itself and gradually generates the desired samples. The key idea is that if we could develop a machine learning (ML) algorithm that is capable of learning the systematic decay of information due to noise, then it should be possible to “reverse” the process and recover the information from the noisy/erroneous data. This is fundamentally different from previous generative models: instead of trying to directly learn the data distribution (as in GANs) or a latent-space embedding (as in variational autoencoders), diffusion models diffuse data samples by adding noise using a Gaussian kernel, and then try to “decode” the information by “denoising” the perturbed data in a hierarchical fashion. In this article, when talking about diffusion models, we mainly focus on DDPMs, one of the state-of-the-art diffusion-based models proposed by Ho et al. [1].

Underlying Mechanisms: As illustrated in Fig. 2, diffusion models are comprised of two processes, namely a forward diffusion process and a parametrized reverse process. Within the forward process, data is mapped into noise by using a Gaussian diffusion kernel, perturbing the input data gradually. More specifically, at each step of the forward diffusion, Gaussian noise is incrementally added to the data. The second process is a parametrized reverse process that aims to undo the forward diffusion via an iterative denoising procedure. The probabilistic model of the reverse process cannot be easily estimated, as it requires the knowledge of the distribution of all possible data samples to calculate the reverse conditional probabilities. Hence, a neural network is trained to approximate (learn) the reverse process.

Fig. 2: Overview of the diffusion-based generative models.

Drawing an input sample from some unknown and possibly complicated distribution, the forward diffusion process is defined by adding Gaussian noise with an adjustable variance at each time-step, which is known as “variance scheduling” in the ML literature. By doing so, data samples gradually lose their distinguishable features as the time-steps proceed, such that after a sufficient number of steps, they approach an isotropic Gaussian distribution [1]. The variance schedule is known and can be designed beforehand. Intuitively, this implies that by properly designing the variance schedule of the diffusion process, the model is able to “see” different structures of the distortion noise during training, making it robust against a wide range of distortion levels when sampling. Within the diffusion model framework, a neural network is trained to estimate the noise vector in the distorted data. Accordingly, the diffusion model is trained based on a loss function that calculates an error measure (e.g., MSE) between the true diffused noise term and the approximated noise. Then, during the inference (sampling) phase, the reverse diffusion process is run to regenerate the true samples from the noisy input.
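To make this mechanism concrete, the following minimal PyTorch sketch shows the closed-form forward noising step and one noise-prediction training step. The linear variance schedule, the number of time-steps, and the interface of the noise-prediction network eps_model(x, t) are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch

T = 1000                                      # number of diffusion time-steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)         # linear variance schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_diffuse(x0, t):
    """Closed-form forward process of [1]: x_t = sqrt(a_bar_t) x_0 + sqrt(1 - a_bar_t) eps."""
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps, eps

def train_step(eps_model, optimizer, x0):
    """One DDPM training step: predict the injected noise and minimize the MSE."""
    t = torch.randint(0, T, (x0.shape[0],))
    xt, eps = forward_diffuse(x0, t)
    eps_hat = eps_model(xt, t)                # network conditioned on the time-step
    loss = torch.mean((eps - eps_hat) ** 2)   # simple noise-prediction loss of [1]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```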

III. APPLICATIONS OF DIFFUSION MODELS IN WIRELESS AI

In this section, we delve into the underlying connections between the diffusion model framework and communication engineering problems, providing directions on the potential applications of diffusion models for wireless AI.

Recall that the main intuition behind a diffusion model is to decompose the data generation process into so-called “denoising” steps and gradually generate the desired samples out of noise. This highlights that the underlying mechanism of diffusion models is similar to what we fundamentally expect from a communication system: to denoise, decode, and reconstruct the information signals from noisy and distorted observations. Taking a top-down view, a communication system consists of three main components, i.e., the transmitter, the communication channel, and the receiver. In this section, we address how diffusion models can be employed at the transmitting and receiving ends to realize GenAI-based communication schemes. Diffusion models are also capable of handling the channel component, which is addressed as well in what follows.

A. GenAI at The Receiver

1) The First Perspective: The core idea of utilizing diffusion models at the receiver is to design a communication system that, instead of avoiding the imperfections incurred in real-world communication scenarios, handles the non-idealities by exploiting the “denoise-and-generate” property of diffusion models. This is also aligned with the visions on resilient AI-native communication systems proposed in [2]. Typically, these non-idealities include channel distortions, hardware impairments, finite-precision arithmetic errors, channel estimation errors, and message decoding errors, which can degrade the receiver performance if not properly dealt with. These practical non-idealities can be removed from the received signals by running the diffusion model framework, starting from the distorted, noisy received signals and ending up with a denoised, reconstructed version of the information signals. To be more specific, the idea is to first train a diffusion model so that the system learns the structure of different practical non-idealities within the batches of noisy samples. Then, in the inference (sampling) phase, we start from the batch of received signals, run the diffusion model algorithm to remove the hardware and channel distortions and residual errors, and reconstruct the original samples. By taking advantage of diffusion models in removing distortions, imperfections, and errors, this approach can also bring about network resilience for wireless communication systems in low signal-to-noise ratio (SNR) regimes, where communication systems are more prone to errors.
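As a minimal sketch of this receiver-side procedure (under the same assumptions as the training sketch above), the reverse sampling updates of [1] can be started from the received distorted batch instead of pure Gaussian noise; the choice of the starting time-step t_start and the eps_model interface are illustrative.

```python
import torch

@torch.no_grad()
def denoise_received(eps_model, y, betas, t_start):
    """Reverse DDPM updates in the spirit of [1], started from the received
    distorted batch y (instead of pure noise) and run down to time-step 0."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = y.clone()
    for t in reversed(range(t_start)):           # t_start chosen to match the distortion level
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        t_batch = torch.full((x.shape[0],), t, dtype=torch.long)
        eps_hat = eps_model(x, t_batch)          # predicted noise component
        coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
        x = (x - coef * eps_hat) / torch.sqrt(alphas[t]) + torch.sqrt(betas[t]) * z
    return x                                     # denoised estimate of the transmitted batch
```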

2) The Second Perspective: Another important vision on the role of GenAI in communication systems is to “automate” the message generation process at the receiver. With the aid of diffusion models, the underlying structure of the transmitted messages can be learned over time. Then, instead of reconstructing the messages upon receiving a noisy signal (which might be challenging, especially in low-SNR regimes), the receiver can “generate” the message in a stand-alone manner. In other words, WiGenAI can bring a paradigm shift towards a diffusion model-assisted AI-native system, in which messages are not simply perceived as bit-pipelines. This can also be viewed as a promising way of realizing the beyond-level-A communications paradigm and semantic communication schemes [12]. Employing diffusion models at the receiver can also help improve the reliability of communication systems with respect to the aberrant behavior of the wireless channel and network. This can be realized thanks to the fact that the receiver (equipped with a GenAI block) can generate the messages even when the communication system is experiencing a poor wireless link, overcoming wireless deficiencies with GenAI proficiencies.

B. GenAI at The Transmitter

Broadly speaking, the goal of employing GenAI algorithms at the transmitter is to take a step towards an AI-native system, in which we can flexibly design PHY signals, adapt to changes, and provide “mutual understanding” among the communication parties. The core idea is that, with the help of diffusion models, we can carry out waveform shaping or precoding design in such a way that the information-bearing signals generated at the transmitter and the signals reconstructed at the receiver become as similar as possible, leading to as few mismatches as possible. This can also improve the information rate of the communication system. In this regard, the transmitter, which is equipped with a diffusion model, can exploit the “denoise-and-generate” property of diffusion models to “mimic” the way the receiver removes noise and extracts the information signals. Hence, the waveforms generated at the transmitter are shaped out of “synthetic noise,” in a similar manner to what will be done at the receiver end to recover them from noisy signals [11]. By synthetically following the functionality of the receiver, the desired “similarity” and mutual understanding between the communication parties can be realized, as the signals are designed in a way that is straightforward for the receiver to recover. To further clarify the case, we provide a concrete example of this idea in the next section by carrying out a case study on diffusion model-based constellation shaping for wireless systems.

C. GenAI for Communication Channel

Fig. 3: BER vs. transmit SNR under AWGN channel and non-Gaussian additive noise for our scheme and [13] as benchmark.

In AI-native communication systems, AI/ML blocks are incorporated into the transmitting and/or receiving modules such that, in an E2E fashion, learning algorithms can jointly optimize the communication blocks. An E2E framework consists of an encoder module, a decoder module, and a communication channel in between. Training such E2E AI-native systems with gradient-based optimizers requires the channel to be known and differentiable in order to back-propagate through the system. However, communication channels do not necessarily follow tractable and differentiable mathematical models; in real-world scenarios, one might not have access to the channel model itself, but only to samples from it. In such scenarios, diffusion models, which have already showcased high-quality generation performance in image-based tasks, can be considered a promising solution [5]. Notably, diffusion models can be employed to approximate and synthetically generate samples of the communication channel and/or its derivatives for E2E training. To elaborate, the GenAI-aided communication system consists of the neural encoder, the neural decoder, and the diffusion-based generative model implemented between them, replacing the channel block. In [5], it is shown that diffusion models can be employed in E2E wireless communication scenarios and that they outperform GANs, having a more stable training procedure and a better generalization ability.
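The following minimal PyTorch sketch illustrates how a pre-trained generative channel surrogate could sit between a neural encoder and decoder so that gradients flow end-to-end; the module interfaces are illustrative assumptions and do not reproduce the implementation of [5].

```python
import torch.nn as nn

class E2ESystem(nn.Module):
    """Neural encoder -> learned (differentiable) channel surrogate -> neural decoder."""
    def __init__(self, encoder: nn.Module, channel_model: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder, self.channel_model, self.decoder = encoder, channel_model, decoder
        # The channel surrogate is pre-trained on measured channel samples (e.g., a
        # diffusion-based generator) and kept frozen during end-to-end training.
        for p in self.channel_model.parameters():
            p.requires_grad_(False)

    def forward(self, bits):
        x = self.encoder(bits)        # transmit symbols
        y = self.channel_model(x)     # synthetic, differentiable channel output
        return self.decoder(y)        # reconstructed message estimate
```

During E2E training, only the encoder and decoder parameters are updated, while gradients still propagate through the frozen surrogate in place of the non-differentiable physical channel.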

IV. CASE STUDIES

In this section, we delve into the WiGenAI paradigm by carrying out two case studies. For the case studies, we employ DDPMs, the state-of-the-art generative models proposed by Ho et al. [1]. The DDPM framework is parameterized by a neural network with three conditional hidden layers (each with 128 neurons and Softplus activation functions). Diffusion steps are incorporated into the model as time-step embeddings, and the output layer is a simple linear layer.
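The description above corresponds roughly to the following PyTorch module; the learnable embedding table, its dimension, and the way the time-step embedding is concatenated with each hidden layer are illustrative assumptions on our part.

```python
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    """Noise-prediction network with three 128-neuron conditional hidden layers
    (Softplus activations), a time-step embedding, and a linear output layer."""
    def __init__(self, data_dim=2, hidden=128, T=1000, emb_dim=32):
        super().__init__()
        self.emb = nn.Embedding(T, emb_dim)            # learnable time-step embedding (assumed)
        self.h1 = nn.Linear(data_dim + emb_dim, hidden)
        self.h2 = nn.Linear(hidden + emb_dim, hidden)
        self.h3 = nn.Linear(hidden + emb_dim, hidden)
        self.out = nn.Linear(hidden, data_dim)         # simple linear output layer
        self.act = nn.Softplus()

    def forward(self, x, t):
        e = self.emb(t)                                # condition every hidden layer on t
        h = self.act(self.h1(torch.cat([x, e], dim=-1)))
        h = self.act(self.h2(torch.cat([h, e], dim=-1)))
        h = self.act(self.h3(torch.cat([h, e], dim=-1)))
        return self.out(h)
```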

A. Case-study 1: Diffusion Model at The Receiver

We consider a communication system with a non-ideal, hardware-impaired transmitter and receiver according to [15]. In this case study, we also consider finite-precision communication, in which a quantized version of the samples is transmitted over the wireless channel. This is a practical assumption for communication over resource-limited wireless networks, aiming to maintain a balance between energy efficiency and reconstruction accuracy. The DDPM-based receiver then reconstructs the original samples from the received batch of distorted-and-quantized data.
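Such a link can be emulated by combining uniform quantization with a hardware-impairment model in the spirit of [15], where the transceiver distortion is an additive term whose power scales with the signal power. The impairment levels, the quantizer resolution, and the real-valued AWGN simplification in the sketch below are illustrative assumptions rather than our exact simulation parameters.

```python
import numpy as np

def uniform_quantize(x, n_bits=4, x_max=1.0):
    """Finite-precision transmission: clip and uniformly quantize the samples."""
    step = 2 * x_max / (2 ** n_bits)
    return np.round(np.clip(x, -x_max, x_max - step) / step) * step

def impaired_awgn_link(x, snr_db, kappa_t=0.05, kappa_r=0.05):
    """Hardware-impaired AWGN link in the spirit of [15]: additive transmit/receive
    distortion terms whose power scales with the signal power, plus thermal noise."""
    sig_pow = np.mean(x ** 2)
    eta_t = kappa_t * np.sqrt(sig_pow) * np.random.randn(*x.shape)   # TX distortion
    eta_r = kappa_r * np.sqrt(sig_pow) * np.random.randn(*x.shape)   # RX distortion
    noise = np.sqrt(sig_pow / 10 ** (snr_db / 10)) * np.random.randn(*x.shape)
    return x + eta_t + eta_r + noise

# Example: quantize a batch of samples and pass it through the impaired link at -10 dB SNR.
tx = uniform_quantize(np.random.randn(64, 2))
rx = impaired_awgn_link(tx, snr_db=-10)
```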

Fig. 3 illustrates the reconstruction performance of the proposed scheme under low-SNR regimes, from −25 dB to −5 dB. For the benchmark, we consider the model of [13] as one of the promising and well-known works on ML-based communication systems. In this experiment, we examine the BER (averaged over 10 runs of sampling). The figure clearly highlights the performance of the DDPM-based approach under low-SNR regimes and quantization errors. Notably, although the DNN benchmark does not show any noticeable performance under low-SNR regimes, our scheme can perform the reconstruction with lower error rates, such that a 30% improvement in BER is achieved compared to the DNN-based receiver at −5 dB SNR. Similar trends also hold for the non-Gaussian case, where Laplacian noise with the same variance as in the AWGN scenario is considered. Such non-Gaussianity can arise from non-Gaussian interference in multi-user scenarios. Remarkably, although we do not re-train our diffusion model for Laplacian noise, its performance is still better than the DNN benchmark by about 20%, highlighting the OOD performance of our approach.

Fig. 4: Mutual information between the generated symbols at the transmitter and the decoded symbols at the receiver.

B. Case-study 2: Diffusion Model at The Transmitter

This case study showcases an application of diffusion models at the transmitter. The goal is to incorporate GenAI algorithms into communication systems for PHY signal design, as presented in Section III-B. Accordingly, we show in this study that diffusion models can be employed at the transmitter to perform constellation shaping. In this case study, we run the DDPM at the transmitter to probabilistically shape (generate) the constellation symbols according to the channel SNR. To do so, we first synthetically inject random noise into the original constellation symbols. (The power of the synthetic noise is calculated according to the SNR of the communication link at each transmission slot.) The synthetically-noisy samples are then fed into the DDPM, and the reverse diffusion process is run to denoise and generate constellation symbols. Finally, the distribution of the symbols generated at the output of the DDPM block is taken as the probabilistic model of the constellation. One can follow the detailed mathematical framework and the step-by-step algorithms in [11].
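The procedure can be sketched as follows; the SNR-to-noise-power mapping, the nearest-symbol assignment used to form the shaping distribution, and the interface of the reverse sampler are illustrative assumptions and not the exact algorithm of [11].

```python
import torch

def shape_constellation(eps_model, base_symbols, snr_db, betas, reverse_sampler, reps=1000):
    """Illustrative DDPM-based constellation shaping at the transmitter.
    base_symbols: (M, 2) tensor of I/Q coordinates of the base constellation."""
    sig_pow = base_symbols.pow(2).sum(dim=1).mean()          # average symbol energy
    noise_pow = sig_pow / (10 ** (snr_db / 10))              # synthetic noise power from link SNR
    x0 = base_symbols.repeat(reps, 1)                        # large batch of candidate symbols
    noisy = x0 + torch.sqrt(noise_pow) * torch.randn_like(x0)
    # Run the reverse diffusion process (e.g., the denoise_received sketch above,
    # with a suitable starting time-step) to denoise/generate symbols.
    generated = reverse_sampler(eps_model, noisy, betas)
    # The empirical distribution of the generated points over the base constellation
    # (via nearest-symbol assignment) serves as the probabilistic shaping model.
    idx = torch.argmin(torch.cdist(generated, base_symbols), dim=1)
    probs = torch.bincount(idx, minlength=base_symbols.shape[0]).float()
    return probs / probs.sum()
```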

Fig. 4 shows the mutual information between the generated symbols at the transmitter and the reconstructed ones at the receiver. The mutual information can be interpreted as a quantitative measure of the “mutual similarity” among communication parties, as discussed in Section III-B. For this experiment, we consider both additive white Gaussian noise (AWGN) and non-Gaussian noise to study the OOD performance of our approach. For the benchmark, we consider a DNN model with a trainable constellation layer and a neural demapper [14]. The figure highlights the performance of the DDPM-based scheme. Notably, our scheme achieves a mutual information of around 1.25 bits and 1 bit for the 64-QAM and 16-QAM geometries, respectively. However, the DNN benchmark does not show any noticeable performance in SNR ranges below −5 dB, even for the 16-QAM geometry (which is supposed to be less prone to errors and mismatches than the 64-QAM case). In addition, a threefold improvement is achieved compared to the DNN benchmark at 0 dB SNR. We further study the scenario of non-Gaussian noise. Remarkably, although we do not re-train the DDPM with non-Gaussian distributions, the performance of our approach does not change under the non-Gaussian assumption, which is not seen during training. This highlights the OOD performance of the DDPM scheme.
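To make the metric concrete, a simple plug-in estimator of the mutual information between transmitted and detected constellation indices is sketched below; the curves in Fig. 4 may be obtained with a different estimator.

```python
import numpy as np

def mutual_information_bits(tx_idx, rx_idx, m):
    """Plug-in estimate of I(X;Y) in bits from paired transmitted/detected
    constellation indices, each taking values in {0, ..., m-1}."""
    joint = np.zeros((m, m))
    for x, y in zip(tx_idx, rx_idx):
        joint[x, y] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Example: a noiseless link over 16-QAM indices gives roughly log2(16) = 4 bits.
idx = np.random.randint(0, 16, size=10000)
print(mutual_information_bits(idx, idx, m=16))
```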

V. FUTURE DIRECTIONS & OPEN ISSUES

For the development of WiGenAI, much remains to be investigated with respect to both practical aspects and theoretical models and analysis. Accordingly, the following future directions can be identified.

1) Diffusion Models for Integrated Sensing & Communications: As a new paradigm in wireless networks, integrated sensing and communication (ISAC) frameworks allow the conventionally competing objectives of sensing and communication to be jointly designed and optimized, exploiting a shared hardware platform and a joint signal processing design for both functionalities. For this application, diffusion models can be regarded as a promising solution that can be exploited to learn the posterior distribution of either the communication components, e.g., the wireless channels, or the sensing components, e.g., users’ locations, using noisy pilot signals observed as input. For example, diffusion models can be employed at the receiver to extract sensing parameters out of the received ISAC signals for the purpose of situational awareness analysis.

2) GenAI for Reconfigurable Intelligent Surface (RIS)-Aided Communications: The goal of RIS is to make the wireless environment (the medium and/or channel) controllable towards a flexible network design. Initial findings in [5] have shown that diffusion-based channel modeling is able to learn end-to-end wireless channels in simple scenarios. Nevertheless, RIS-aided communication channels consist of three components, i.e., transmitter-to-receiver, transmitter-to-RIS, and RIS-to-receiver links. Accordingly, the non-ideal characteristics of RIS-aided communications depend on the inherent channel responses of these three components. Moreover, the RIS is a passive element, so the responses of the TX-to-RIS and RIS-to-RX channels cannot necessarily be obtained via traditional channel estimation methods. In this regard, diffusion models are a potential solution for dealing with RIS-assisted communication channels, either to learn the wireless links or to compensate for RIS-related non-idealities. In addition, diffusion models can be employed to learn the fast time-varying, reconfigurable environment, e.g., by generating synthetic realizations of wireless channels.

3) Diffusion Models for Digital Twins & Metaverse: State-of-the-art generative models, such as diffusion models, can help process the massive amount of real-world data required to create digital replicas of the real world, known as digital twins, merging the physical, digital, and virtual worlds. In particular, GenAI can be a promising solution for digital twin networks. Notably, by exploiting GenAI, we can reduce the amount of information that needs to be conveyed from the multi-sensory transmitter to the receiver for building the digital twin. This is due to the fact that generative models are capable of accurately learning and modeling highly-structured data (such as 3D holograms or higher-dimensional objects) corresponding to the physical assets. Moreover, GenAI can also be incorporated into Metaverse systems. Taking different modalities, such as text, images, sounds, animation, and 3D models as input, GenAI algorithms, such as diffusion models, have the ability to provide Metaverse users with diverse content, realizing immersive experiences. This is an important and timely direction that needs further investigation from both communication and AI perspectives.

4) Distributed Diffusions: In addition to time and energy consumption challenges, training generative models on large-scale datasets can pose challenges in terms of data privacy and accessibility. One solution to this problem is to incorporate diffusion models into the federated learning (FL) framework, as a promising privacy-aware approach for collaborative training. Employing distributed diffusion models over wireless networks can, however, cause practical challenges, including the overhead of large-scale model parameter computation and transmission over the network. Hence, an important direction is to investigate computation- and communication-efficient network designs to support the distributed deployment of diffusion models over wireless networks. In this regard, hardware limitations are another challenge that needs further investigation. Specifically, although on-device federated training of diffusion models can enhance model personalization, diffusion models can contain over a billion parameters, which makes them challenging to deploy in resource-constrained edge networks. Possible solutions to this problem include neural model optimization, pruning, and compression within the diffusion framework.
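As a minimal sketch of how such distributed training could be aggregated, the snippet below applies standard federated averaging (FedAvg) to the weights of per-device diffusion denoisers; the communication, compression, and scheduling aspects discussed above are intentionally omitted, and the function interface is illustrative.

```python
import torch

@torch.no_grad()
def federated_average(global_model, client_models, client_weights):
    """One FedAvg round for distributed diffusion denoisers: the server replaces the
    global weights with the weighted average of the client weights (assumes all
    parameters/buffers are floating-point tensors)."""
    total = float(sum(client_weights))
    client_states = [m.state_dict() for m in client_models]
    new_state = {}
    for name in global_model.state_dict():
        new_state[name] = sum((w / total) * s[name]
                              for s, w in zip(client_states, client_weights))
    global_model.load_state_dict(new_state)
    return global_model
```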

5) Theoretical Directions for WiGenAI: With respect to theoretical aspects, further in-depth research and analysis are required to develop new metrics, as well as diffusion model-based communication frameworks with formal guarantees, for the performance assessment of WiGenAI. For example, context-aware mathematical metrics are required to assess the resilience and robustness of GenAI-based solutions in wireless applications.

6) Practical Testbeds & Proof-of-Concept (PoC) Studies: There is an imperative need for further research to understand how to implement evidence-based solutions in real-world settings, and to identify barriers towards implementing GenAI-based protocols. In other words, practical solutions and proof-of-concept (PoC) studies on GenAI-based communication systems need to be designed and tested before the potential large-scale deployment of WiGenAI.

Existing Challenges: To highlight the existing challenges in the development of WiGenAI, we note that diffusion models suffer from long run-times for training and sampling. This can make it challenging to develop diffusion-based algorithms for wireless communication systems with stringent latency requirements. As elaborated in Section II, diffusion models learn to noise and denoise the data over a fixed number of time-steps during training and inference. Although this potentially large number of steps results in high-quality data samples, it also incurs computational complexity and energy consumption. Therefore, how to make the training and inference of diffusion models robust, efficient, and sustainable is an important question that should be addressed in the context of wireless networks. This challenge is particularly important when it comes to on-device learning with memory-constrained edge devices in wireless networks.

The hallucination effect of generative models is another important challenge that should be considered for the development of GenAI-based communication systems. The hallucination effect refers to a phenomenon in which generative models produce responses with fabricated data that are not real and do not match the expected/identified patterns. Generating such faulty results, e.g., constellation points that do not adhere to networking standards, can significantly affect network trust. Finally, when it comes to the large-scale deployment of GenAI-based systems, another important challenge concerns data management, model updating and sharing, and the life-cycle management of generative models for communication networks, which should be addressed in the context of intelligent network design.

VI. CONCLUSIONS

In this article, we have introduced WiGenAI, highlighting the role of GenAI in wireless communications. First, diffusion models were introduced as the state-of-the-art family of generative models. Insightful visions and directions on the applications of diffusion models in wireless communication systems were also discussed to delve into the WiGenAI paradigm. Then, we have conducted two case studies, showcasing how state-of-the-art GenAI models can be incorporated into AI-native communication systems. Specifically, we have proposed DDPMs for finite-precision, hardware-impaired communication, highlighting their resilient performance. We have also studied DDPMs for constellation shaping to shed light on one of the applications of GenAI in PHY communication engineering at the transmitter. Finally, we have discussed future directions and open issues to promote future research endeavors towards WiGenAI.

Our findings in this research can be leveraged for the development of a new paradigm in AI-native wireless systems, i.e., WiGenAI, in which communication frameworks, as well as practical protocols, can be revisited by introducing GenAI algorithms such as diffusion models and their variants. Our findings and research implications are relevant to a wide range of players in the telecommunication industry, including network designers, industrial stakeholders, and policy makers, to ensure that the merits of GenAI for wireless systems are fully understood when it comes to driving market-based competition.

Matti Latva-aho is a professor of wireless communications at the University of Oulu and the Director of the national 6G Flagship Programme. He is also a Global Fellow (visiting professor) with The University of Tokyo. Prof. Latva-aho has published over 600 conference and journal papers in the field of wireless communications. He received the Nokia Foundation Award in 2015 for his achievements in mobile communications research.

REFERENCES

[1] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851, 2020.

[2] S. Ali, et al., “6G white paper on machine learning in wireless communication networks,” arXiv preprint arXiv:2004.13875, 2020.

[3] B. Levac, A. Jalal, K. Ramchandran, and J. I. Tamir, “MRI reconstruction with side information using diffusion models,” arXiv preprint arXiv:2303.14795, Jun. 2023. [Online]. Available: https://arxiv.org/abs/2303.14795.

[4] Y. Liu, H. Du, D. Niyato, J. Kang, Z. Xiong, D. I. Kim, and A. Jamalipour, “Deep generative model and its applications in efficient wireless network management: A tutorial and case study,” arXiv preprint arXiv:2303.17114, Mar. 2023.

[5] M. Kim, R. Fritschek, and R. F. Schaefer, “Learning end-to-end channel coding with diffusion models,” 26th International ITG Workshop on Smart Antennas and 13th Conference on Systems, Communications, and Coding (WSA & SCC 2023), Braunschweig, Germany, Feb. 27 – Mar. 3, 2023, pp. 1–6.

[6] T. Wu, Z. Chen, D. He, L. Qian, Y. Xu, M. Tao, and W. Zhang, “CDDM: Channel denoising diffusion models for wireless communications,” arXiv preprint arXiv:2305.09161, May 2023.

[7] M. Letafati, S. Ali, and M. Latva-aho, “Denoising diffusion probabilistic models for hardware-impaired communications,” arXiv preprint arXiv:2309.08568, Oct. 2023.

[8] X. Niu, X. Wang, D. Gündüz, B. Bai, W. Chen, and G. Zhou, “A hybrid wireless image transmission scheme with diffusion,” arXiv preprint arXiv:2308.08244, Aug. 2023.

[9] M. Arvinte and J. I. Tamir, “MIMO channel estimation using score-based generative models,” IEEE Trans. Wireless Commun., vol. 22, no. 6, pp. 3698–3713, Jun. 2023.

[10] T. Jayashankar, G. C. F. Lee, A. Lancho, A. Weiss, Y. Polyanskiy, and G. W. Wornell, “Score-based source separation with applications to digital communication signals,” arXiv preprint arXiv:2306.14411, Jun. 2023.

[11] M. Letafati, S. Ali, and M. Latva-aho, “Probabilistic constellation shaping with denoising diffusion probabilistic models: A novel approach,” arXiv preprint arXiv:2309.08688, Sep. 2023.

[12] C. Chaccour, W. Saad, M. Debbah, Z. Han, and H. V. Poor, “Less data, more knowledge: Building next generation semantic communication networks,” arXiv preprint arXiv:2211.14343, Nov. 2022.

[13] F. A. Aoudia and J. Hoydis, “Model-free training of end-to-end communication systems,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 11, pp. 2503–2516, Nov. 2019.

[14] M. Stark, F. A. Aoudia, and J. Hoydis, “Joint learning of geometric and probabilistic constellation shaping,” 2019 IEEE Globecom Workshops (GC Wkshps), Waikoloa, HI, USA, 2019, pp. 1–6.

[15] E. Björnson, J. Hoydis, M. Kountouris, and M. Debbah, “Massive MIMO systems with non-ideal hardware: Energy efficiency, estimation, and capacity limits,” IEEE Transactions on Information Theory, vol. 60, no. 11, pp. 7112–7139, Nov. 2014.

Mehdi Letafati received his B.Sc. and M.Sc. degrees in Electrical Engineering from the Sharif University of Technology, Tehran, Iran. He is currently a doctoral researcher at the Center for Wireless Communications, University of Oulu, Finland. His current research interests lie at the intersection of machine learning and wireless communications. He is also interested in information theory, data science, and enabling technologies for the Metaverse.

Samad Ali received his Ph.D. in wireless communications engineering from the University of Oulu, Finland. He is currently a senior research specialist at Nokia and a postdoctoral researcher at the University of Oulu. His main research interest is the applications of AI/ML in wireless communication networks.