Publications
-
Ali Ahmed, Awadelrahman Mohamedelsadig; Eliassen, Frank & Zhang, Yan
(2023).
Combinatorial Auctions and Graph Neural Networks for Local Energy Flexibility Markets,
Proceedings of 2023 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe), 23 October 2023.
IEEE conference proceedings.
ISBN 979-8-3503-9678-2.
pp. 1–6.
doi:
10.1109/ISGTEUROPE56780.2023.10407292.
Abstract:
This paper proposes a new combinatorial auction framework for local energy flexibility markets, which addresses prosumers’ inability to bundle multiple flexibility time intervals. To solve the underlying NP-complete winner determination problem, we present a simple yet powerful heterogeneous tri-partite graph representation and design graph neural network-based models. Our models achieve an average optimal value deviation of less than 5% from an off-the-shelf optimization tool and show linear inference time complexity, compared to the exponential complexity of the commercial solver. These contributions and results demonstrate the potential of machine learning for efficiently allocating energy flexibility resources in local markets and for solving optimization problems in general.
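The winner determination problem the abstract refers to can be stated compactly: accept a subset of bundle bids with no overlapping time intervals so that total bid value is maximized. A minimal brute-force sketch on hypothetical toy data (the paper itself solves large instances with graph neural networks, not enumeration):

```python
from itertools import combinations

# Toy winner-determination problem for a combinatorial flexibility auction.
# Each bid is (bidder, bundle_of_time_intervals, price); a feasible allocation
# accepts bids whose bundles do not overlap. Data here is hypothetical.
bids = [
    ("p1", frozenset({1, 2}), 10.0),
    ("p2", frozenset({2, 3}), 8.0),
    ("p3", frozenset({3}), 5.0),
]

def winner_determination(bids):
    """Exhaustively search bid subsets; exponential, hence the GNN approach."""
    best_value, best_set = 0.0, ()
    for r in range(1, len(bids) + 1):
        for subset in combinations(bids, r):
            intervals = [iv for _, bundle, _ in subset for iv in bundle]
            if len(intervals) == len(set(intervals)):  # no interval sold twice
                value = sum(price for _, _, price in subset)
                if value > best_value:
                    best_value, best_set = value, subset
    return best_value, best_set

value, winners = winner_determination(bids)  # p1 and p3 win with value 15.0
```

The exponential subset enumeration above is exactly the cost that motivates learning a model with linear inference time.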
-
Ali Ahmed, Awadelrahman Mohamedelsadig & M. Ali, Leen A.
(2021).
Explainable Medical Image Segmentation via Generative Adversarial Networks and Layer-wise Relevance Propagation.
Nordic Machine Intelligence (NMI).
ISSN 2703-9196.
1(MedAI: Transparency in Medical Image Segmentation),
pp. 20–22.
doi:
10.5617/nmi.9126.
-
Mohamedelsadig Ali Ahmed, Awadelrahman
(2020).
Generative Adversarial Networks for Automatic Polyp Segmentation.
CEUR Workshop Proceedings.
ISSN 1613-0073.
Abstract:
This paper aims to contribute to benchmarking the automatic polyp segmentation problem using the generative adversarial networks framework. Perceiving the problem as an image-to-image translation task, conditional generative adversarial networks are utilized to generate masks conditioned on the input images. Both the generator and the discriminator are based on convolutional neural networks. The model achieved a Jaccard index of 0.4382 and an F2 score of 0.611.
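The two metrics reported in the abstract are standard overlap measures for binary segmentation masks. A minimal sketch of how they are computed, on toy arrays rather than the paper's data:

```python
import numpy as np

def jaccard(pred, true):
    # Intersection over union of the two binary masks.
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union

def f_beta(pred, true, beta=2.0):
    # F-beta score; beta=2 weights recall (missed polyp pixels) more than precision.
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    b2 = beta ** 2
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)

# Toy 2x3 masks (hypothetical, for illustration only).
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
true = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
```

On these toy masks the Jaccard index is 0.5; the paper's reported 0.4382 and 0.611 come from the same formulas applied to the benchmark dataset.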
-
Mohamedelsadig Ali Ahmed, Awadelrahman; Zhang, Yan & Eliassen, Frank
(2020).
Generative Adversarial Networks and Transfer Learning for Non-Intrusive Load Monitoring in Smart Grids,
2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm).
IEEE International Conference on Smart Grid Communications (SmartGridComm).
ISBN 978-1-7281-6127-3.
doi:
10.1109/SmartGridComm47815.2020.9302933.
Abstract:
The objective of non-intrusive load monitoring (NILM) is to disaggregate the total power consumption of a building into individual appliance-level profiles. This gives consumers insights to use energy efficiently and realizes smart grid efficiency outcomes. While many studies focus on achieving accurate models, few address the models' generalizability. This paper proposes two approaches based on generative adversarial networks to achieve high-accuracy load disaggregation. Concurrently, the paper addresses model generalizability in two ways: the first is transfer learning by parameter sharing, and the other is learning compact common representations between the source and target domains. The paper also quantitatively evaluates the worth of these transfer learning approaches based on the similarity between the source and target domains. The models are evaluated on three open-access datasets and outperform recent machine learning methods.
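The disaggregation task the abstract describes has a simple structure: the smart meter observes only the sum of per-appliance profiles, and a model must recover the individual series. A minimal sketch of that setup with hypothetical synthetic profiles (not the paper's datasets or models):

```python
import numpy as np

# Toy appliance-level power profiles in watts, one value per time step.
fridge = np.array([50, 50, 50, 50, 50], dtype=float)
kettle = np.array([0, 2000, 2000, 0, 0], dtype=float)
washer = np.array([0, 0, 500, 500, 0], dtype=float)

# The only signal a NILM system observes: the aggregate meter reading.
aggregate = fridge + kettle + washer

# A disaggregation model maps aggregate -> per-appliance estimates; the paper
# trains GAN-based models for this mapping and transfers them across domains.
```

Everything downstream of this setup (the GAN architectures, the parameter sharing, the shared representations) is about learning that inverse mapping from aggregate to appliances.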
See all works in Cristin
-
Mohamedelsadig Ali Ahmed, Awadelrahman
(2021).
An Empirical Analysis of Transfer Learning for Generative Adversarial Networks.
Abstract:
The generative adversarial networks (GAN) framework has attracted immense attention from the machine learning community in recent years. GANs have succeeded in generating realistic-looking data from noise and have countless applications. Transfer learning, on the other hand, aims to enhance machine learning models when data is scarce. The main intuition behind transfer learning is to train models on plenty of data that is “not” our targeted data but shares some attributes with it. In models consisting of a single network, such as CNNs, transfer learning is implemented by sharing the first layers (from the input side) of the model between the source and target domains, initializing the other layers, and then retraining the model on the scarce target data. This gives the model a good starting point when learning on the target data and achieves better results than training from scratch on the target. The GAN framework, however, consists of two networks, a generator and a discriminator, and a legitimate question here is: which network, and which part of that network, contains the transferable features and hence parameters? Should we fine-tune or freeze the shared parameters? I try to answer this specific and direct question by presenting empirical results from a series of experiments designed around the conditional GAN framework.
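The parameter-sharing scheme described above can be sketched in a few lines: copy the early (input-side) layers from a source-trained model into a target model, then either freeze them or leave them trainable for fine-tuning. A toy numpy illustration under hypothetical layer sizes, not the actual experimental code:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(layer_sizes):
    # One weight matrix per layer; a stand-in for a trained network.
    return [rng.standard_normal((m, n))
            for m, n in zip(layer_sizes, layer_sizes[1:])]

source = init_model([4, 8, 8, 2])   # "trained" on plentiful source data
target = init_model([4, 8, 8, 2])   # to be trained on scarce target data

n_shared = 2                         # share the first two layers (input side)
frozen = []
for i in range(n_shared):
    target[i] = source[i].copy()     # parameter sharing: copy source weights
    frozen.append(i)                 # freeze: exclude from gradient updates

# Only the remaining layers would receive updates during target training.
trainable = [i for i in range(len(target)) if i not in frozen]
```

The freeze-versus-fine-tune choice is exactly the `frozen` list above: dropping indices from it turns frozen shared layers back into fine-tunable ones.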
See all works in Cristin
Published
15 Feb. 2019 08:32
- Last modified
25 June 2021 01:04