Posts by Collection




FAME: Framework for Adversarial Malware Evaluation. This work aims to evaluate adversarial attacks and increase the resilience of malware classifiers against adaptive adversaries.

Predicting H-1B visa eligibility in the US


This project aims to train a classifier, based on a dataset provided by the US Department of Labor, to predict whether a visa application would be deemed eligible for the H-1B program. This is the Capstone Project for Udacity's Machine Learning Engineer Nanodegree.


Malware Coded into Synthetic Genomes

Published in Universidad de Buenos Aires Digital Library, 2016

Malware-coded synthetic genomes have been met with skepticism within the scientific community, but new research might help to change that perception.

IoT-Botnet Detection and Isolation by Access Routers

Published in IEEE 9th International Conference on the Network of the Future (NoF), 2018

In recent years, emerging technologies such as the Internet of Things (IoT) have gained increasing interest in various communities. However, the majority of IoT devices have little or no protection at the software level. This paper presents an IoT botnet detection and isolation approach at the level of access routers that makes IoT devices more resilient to attacks.

ARMED: How Automatic Malware Modifications Can Evade Static Detection?

Published in IEEE 5th International Conference on Information Management (ICIM), 2019

Modifying existing malicious software until malware scanners misclassify it as clean is an attractive technique for cybercriminals. We propose ARMED - Automatic Random Malware Modifications to Evade Detection.

Training GANs to Generate Adversarial Examples Against Malware Classification

Published in IEEE 40th Symposium on Security and Privacy (S&P), 2019

Machine learning has been increasingly used to detect new malware, yet recent research has shown that deep neural networks exhibit unexpected behavior when confronted with adversarial examples. We designed a GAN-based approach to generate malware adversarial examples.

AIMED: Evolving Malware with Genetic Programming to Evade Detection

Published in IEEE 18th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2019

Genetic Programming (GP) has previously achieved valuable results in fields such as image processing and arcade learning. Building on these results, AIMED - Automatic Intelligent Malware Modifications to Evade Detection - was designed and implemented, using genetic algorithms to evade malware classifiers.
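
The genetic search behind this idea can be sketched as a standard evolutionary loop over sequences of transformations. The sketch below is illustrative only: `ACTIONS` stands in for functionality-preserving file modifications, and `detector` is a stub scoring function, not a real scanner.

```python
import random

random.seed(0)

# Hypothetical transformation identifiers (stand-ins for
# functionality-preserving modifications such as padding or renaming).
ACTIONS = list(range(8))

def detector(sequence):
    """Stub static scanner returning a detection score in [0, 1]."""
    return ((sum(sequence) * 31 + len(sequence)) % 17) / 16.0

def fitness(sequence):
    # Lower is better: low detection score, short modification sequence.
    return detector(sequence) + 0.01 * len(sequence)

def evolve(pop_size=20, seq_len=5, generations=30):
    population = [[random.choice(ACTIONS) for _ in range(seq_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 2]          # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, seq_len)           # crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                    # mutation
                child[random.randrange(seq_len)] = random.choice(ACTIONS)
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

best = evolve()
```

Selection, crossover, and mutation are the three canonical GP operators; in a real evasion setting the fitness call would additionally verify that the modified file still executes.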

Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality

Published in ACM 26th Conference on Computer and Communications Security (CCS), 2019

Machine learning has proved to be a promising technology for determining whether a piece of software is malicious or benign. In this work, we present a gradient-based approach that carefully generates valid malicious executables capable of bypassing classifiers.

OpenMTD: A Framework for Efficient Network-Level MTD Evaluation

Published in ACM 27th Conference on Computer and Communications Security (CCS), MTD Workshop, 2020

Moving Target Defense (MTD) defends network systems at different levels by shifting the various surfaces of the protected environment. Most approaches have only been evaluated theoretically, and comparisons are still lacking. Hence, we developed a hybrid platform that evaluates such techniques and adds features such as a connection tracker with a fingerprinting service and a honeypot module, which helps to thwart attackers' attempts.
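
One canonical example of shifting a network surface is port hopping, where a service's listening port is derived from a shared secret and a time epoch so that legitimate peers can follow the rotation while scanners cannot. The sketch below is a minimal illustration of that idea; the secret, service names, and port range are assumptions, not part of OpenMTD.

```python
import hashlib

def rotated_port(service, epoch, secret="shared-secret", low=20000, high=60000):
    """Hypothetical port-hopping MTD: derive the service's current port
    deterministically from a shared secret and the time epoch, so the
    exposed surface changes over time but stays computable by peers."""
    digest = hashlib.sha256(f"{secret}|{service}|{epoch}".encode()).digest()
    return low + int.from_bytes(digest[:4], "big") % (high - low)
```

Because the mapping is deterministic for a given epoch, both endpoints agree on the current port without coordination, while an attacker's earlier scan results go stale as soon as the epoch advances.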

AIMED-RL: Exploring Adversarial Malware Examples with Reinforcement Learning

Published in Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 2021

We present AIMED-RL: Automatic Intelligent Malware modifications to Evade Detection using Reinforcement Learning. Our approach generates adversarial examples that lead machine learning models to misclassify malware files without compromising their functionality. We implement our approach using a Distributional Double Deep Q-Network agent and add a penalty to improve the diversity of transformations. Thereby, we achieve competitive results compared with previous reinforcement learning research while minimizing the required sequence of transformations.
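
The diversity penalty can be understood as reward shaping: the base reward grows as the detection score drops, and a term proportional to how concentrated the action sequence is discourages repeating one transformation. The function below is a simplified sketch of that idea, not the paper's exact reward; the action names and weight are hypothetical.

```python
from collections import Counter

def shaped_reward(detection_score, actions, penalty_weight=0.2):
    """Illustrative reward shaping: reward evasion progress while
    penalizing sequences dominated by a single repeated action."""
    base = 1.0 - detection_score
    if not actions:
        return base
    counts = Counter(actions)
    # Fraction of the sequence taken by the most repeated action:
    # 1/len(actions) when all differ, 1.0 when all are identical.
    concentration = counts.most_common(1)[0][1] / len(actions)
    return base - penalty_weight * concentration
```

With this shaping, an agent that reaches the same detection score via three distinct transformations earns more reward than one that applies the same transformation three times, nudging the policy toward diverse, shorter sequences.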

Realizable Universal Adversarial Perturbations for Malware

Published in arXiv preprint arXiv:2102.06747, 2022

Machine learning classification models are vulnerable to adversarial examples -- input-specific perturbations that can manipulate the model's output. Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of these adversarial examples. While UAPs have been explored in application domains beyond computer vision, little is known about their implications in the malware domain, where attackers must reason about satisfying challenging problem-space constraints. Therefore, we explore the challenges and strengths of UAPs in the context of malware classification.
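
The core mechanic of a UAP, stripped of problem-space constraints, is a single perturbation vector that flips the decision for many inputs at once. The toy sketch below demonstrates this on an assumed linear classifier (the weights, data, and L-inf budget are all illustrative; a real feature-space UAP would use per-sample gradients against a trained model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "malware classifier" over 10 features (w, b are illustrative).
w = rng.normal(size=10)
b = 0.0

def predict(x):
    return (x @ w + b > 0).astype(int)          # 1 = flagged malicious

# Keep only samples the model currently flags as malicious.
X = rng.normal(size=(200, 10))
X = X[predict(X) == 1]

# Build one perturbation v shared by all samples under an L-inf budget
# eps: repeatedly step against the score gradient (which is w for a
# linear model) and project back onto the budget.
eps, step = 0.5, 0.05
v = np.zeros(10)
for _ in range(100):
    fooled = predict(X + v) == 0
    if fooled.mean() >= 0.8:                    # stop at target fooling rate
        break
    v -= step * w / np.linalg.norm(w)           # push the score downward
    v = np.clip(v, -eps, eps)                   # project onto the budget

fooling_rate = float((predict(X + v) == 0).mean())
```

The same `v` is added to every input, which is what makes the perturbation "universal"; the malware-specific difficulty discussed in the paper is mapping such a feature-space vector back to a valid, functional executable.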