All Submissions

Veronika Samborska

Review of the major studies discussing adversarial attacks and defences

Heng-Jui Chang

An overview of recent advancements in data augmentation for automatic speech recognition.

Cherhykalo Denys

In this chapter, we analyze current advances in AI development and their biological implications. I believe it will become possible not only to convey information more easily, but also to generate new ideas for improving modern algorithms using AI.

Roman Böhringer

This review gives an introduction to Causal Machine Learning with a focus on healthcare and the issues that are faced there. Several recent papers and research ideas in this area are presented.

David McCaffary

Critical appraisal of prominent current approaches to alleviating catastrophic forgetting in neural networks, drawing on inspiration from neuroscience.

Kaichao You

Transfer learning is a popular method in the deep learning community, but it is usually implemented naively (e.g. copying weights as initialization). Co-Tuning is a recently proposed technique for improving transfer learning that is easy to implement and effective across a wide variety of tasks.

Nickil Maveli

The widespread use of black-box models in AI has increased the need for explanation methods that reveal how these opaque models arrive at concrete decisions. We describe the problem, prominent solutions, and example applications for each of these approaches, as well as their vulnerabilities and flaws. We hope to provide an enriching and informative introduction to post-hoc machine learning explainability.

Shaoxiong Ji and Teemu Saravirta and Shirui Pan and Guodong Long and Anwar Walid

Federated learning is a new learning paradigm that decouples data collection and model training via multi-party computation and model aggregation. As a flexible learning setting, federated learning has the potential to integrate with other learning frameworks. We conduct a focused survey of federated learning in conjunction with other learning algorithms.

Gabriel Bénédict

Generative Adversarial Networks, their variants and their evaluation

Pierre Orhan

A geometrical perspective proves effective in developing machine learning tools for computational neuroscience.

Guy Leroy

A new family of reinforcement learning algorithms, Go-Explore, surpasses all previous approaches on hard-explore Atari games by addressing detachment and derailment.

Jintang Li

This review gives an introduction to Adversarial Machine Learning on graph-structured data, including several recent papers and research ideas in this field. This review is based on our paper "A Survey of Adversarial Learning on Graph".

Devi Sandeep Endluri, Chinmayee Rane

This review introduces Handwriting Text Recognition (HTR), then describes the different groups of approaches to HTR, and finally summarizes the latest research in OCR techniques for offline handwriting recognition on documents.

Srishti Saha

This review gives a comprehensive study of the application of information theory in machine learning methods and algorithms.

Tyler Darwin

This paper introduces a principled clustering objective based on maximizing Mutual Information (MI) between paired data samples under a bottleneck, equivalent to distilling their shared abstract content (co-clustering), which tends to avoid degenerate clustering solutions.

Christopher Dzuwa

Deep learning relies on the availability of a large corpus of data (labeled or unlabeled). Thus, one challenging unsettled question is: how can a deep network be trained on a relatively small dataset? To tackle this question, Ahmed Taha, Abhinav Shrivastava, and Larry Davis proposed an evolution-inspired training approach to boost performance on relatively small datasets. This article gives a detailed summary of their paper, “Knowledge Evolution in Neural Networks”.

Anish Ghosh, Bivek Panthi, Sishir Sunar

We start by explaining how handwritten mathematical expressions have unstable scale. Then we show how augmented layers are used to scale those mathematical expressions. We continue by explaining how an attention-based encoder-decoder is used to extract features and generate predictions. Drop attention is used when the attention distribution of the decoder is not precise. This method achieves better performance than existing methods.

Rachel Wang

As quantum systems become increasingly complex, optimisation algorithms are becoming a requirement for high-precision experiments. Machine-learning online optimisation offers an alternative to theoretical models, relying instead on experimental observations to continuously update an internal surrogate model. Two online optimisation techniques are reviewed in this paper in the context of evaporative cooling for the efficient and high-quality production of Bose-Einstein condensates (BEC). These two methods prioritise different stages of cooling with one focused on optimising experimental settings and the other on improving image acquisition.

Alpay Sabuncuoğlu

This article reviews recent approaches and datasets in the deep generative sketch modelling field that take human-machine creative collaboration one step closer.

Rony Abecidan

Most machine learning models have a tendency to rely too strongly on the distribution of the data on which they were trained. In this review, I discuss ways to design an image classifier able to generalize well to a distribution that is different from, but related to, its training distribution.

Aniket Agarwal*, Ayush Mangal*, Vipul*

This review gives an introduction to scene graphs and their usage in various downstream tasks. Many of the recent methods for their generation are discussed in detail, along with a comparison between them.