404... you know the drill
Looks like you found the edge of the matrix.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page that is not in the main menu.
Published:
In case you need more processing power to train your model, using some cloud infrastructure can be a good option. During my H-1B project I gave it a try on AWS. Although I ended up training the model locally for various reasons, having a quick guide on how to run the notebook on AWS proved helpful.
Published:
When I began researching this topic towards the end of 2013, I sensed a certain skepticism from the scientific community, particularly as people from different backgrounds started experimenting across disciplines, which can reveal new vectors for IT security attacks.
Published:
FAME: Framework for Adversarial Malware Evaluation. This work aims to evaluate adversarial attacks and increase the resilience of malware classifiers against adaptive adversaries.
Published:
This project aims to train a classifier, based on a dataset provided by the US Department of Labor, to predict whether a visa application would be deemed eligible for the H-1B program. This is the Capstone Project for Udacity’s Machine Learning Engineer Nanodegree.
Published in Universidad de Buenos Aires Digital Library, 2016
Malware-coded synthetic genomes have caused skepticism within the scientific community, but new research might help change that perception.
Published in IEEE 9th International Conference on the Network of the Future (NoF), 2018
In recent years, emerging technologies such as the Internet of Things have gained increasing interest in various communities. However, the majority of IoT devices have little or no protection at the software level. The goal of this paper is to present an IoT botnet detection and isolation approach at the level of access routers that makes IoT devices more resilient to attacks.
Published in IEEE 5th International Conference on Information Management (ICIM), 2019
Modifying existing malicious software until malware scanners misclassify it as clean is an attractive technique for cybercriminals. We propose ARMED - Automatic Random Malware Modifications to Evade Detection.
Published in IEEE 40th Symposium on Security and Privacy (S&P), 2019
Machine learning is increasingly used to detect new malware, yet recent research has shown that deep neural networks exhibit unexpected behavior when confronted with adversarial examples. We designed a GAN-based approach to generate adversarial malware examples.
Published in IEEE 18th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2019
Genetic Programming (GP) has previously achieved valuable results in the fields of image processing and arcade learning. Hence, AIMED - Automatic Intelligent Malware Modifications to Evade Detection - was designed and implemented using genetic algorithms to evade malware classifiers.
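The core genetic-algorithm loop behind this idea can be sketched in a few lines. This is an illustrative toy, not AIMED's actual code: the transformation set and the fitness function are stand-ins (a real fitness score would query a malware classifier on the modified file).

```python
import random

# Toy genetic-algorithm sketch: evolve sequences of "transformations" and keep
# the candidates that score best. TRANSFORMS and fitness() are illustrative
# stand-ins, not the paper's implementation.

random.seed(1)
TRANSFORMS = list(range(8))  # stand-ins for binary modifications

def fitness(sequence):
    # Stand-in for "how close is the modified file to evading the classifier";
    # here we simply prefer short, diverse transformation sequences.
    return len(set(sequence)) - 0.1 * len(sequence)

def mutate(sequence):
    # Replace one random transformation in the sequence.
    seq = list(sequence)
    seq[random.randrange(len(seq))] = random.choice(TRANSFORMS)
    return seq

population = [[random.choice(TRANSFORMS) for _ in range(5)] for _ in range(10)]
for _ in range(20):                   # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]        # selection: keep the fittest half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

The same select-and-mutate structure carries over when the fitness function is replaced by real classifier feedback; only the evaluation step changes.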
Published in ACM 26th Conference on Computer and Communications Security (CCS), 2019
Machine learning has proved to be a promising technology for determining whether a piece of software is malicious or benign. In this work, we present a gradient-based approach that carefully generates valid malicious executables able to bypass classifiers.
Published in ACM 27th Conference on Computer and Communications Security (CCS), MTD Workshop, 2020
MTD represents a way of defending network systems at different levels by shifting the various surfaces of the protected environment. Most approaches have only been evaluated theoretically, and comparisons are still lacking. Hence, we developed a hybrid platform that evaluates such techniques with additional features, such as a connection tracker with a fingerprinting service and a honeypot module, which helps to deflect attackers' attempts.
Published in Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 2021
We present AIMED-RL, Automatic Intelligent Malware modifications to Evade Detection using Reinforcement Learning. Our approach is able to generate adversarial examples that lead machine learning models to misclassify malware files without compromising their functionality. We implement our approach using a Distributional Double Deep Q-Network agent, adding a penalty to improve the diversity of transformations. Thereby, we achieve competitive results compared to previous research based on reinforcement learning while minimizing the required sequence of transformations.
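The diversity penalty mentioned above can be illustrated with a minimal reward-shaping sketch. The action names, penalty weight, and scoring are hypothetical placeholders (a real setup would score each action by querying the malware classifier), shown only to make the idea of penalizing repeated transformations concrete.

```python
import random

# Illustrative sketch of a reward with a diversity penalty: the base reward is
# an evasion score, and repeating a transformation already in the episode
# history is penalized. Names and values are placeholders, not AIMED-RL's code.

ACTIONS = ["add_section", "rename_section", "append_overlay", "pad_bytes"]

def reward(evasion_score: float, history: list, action: str, penalty: float = 0.1) -> float:
    """Penalize an action proportionally to how often it was already taken."""
    repeats = history.count(action)
    return evasion_score - penalty * repeats

random.seed(0)
history, total = [], 0.0
for _ in range(8):
    action = random.choice(ACTIONS)
    evasion_score = random.random()   # stand-in for classifier feedback
    total += reward(evasion_score, history, action)
    history.append(action)

print(f"episode return: {total:.3f}")
```

Shaping the reward this way nudges the agent toward varied, short transformation sequences instead of hammering one modification repeatedly.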
Published in arXiv preprint arXiv:2102.06747, 2022
Machine learning classification models are vulnerable to adversarial examples -- input-specific perturbations that can manipulate the model's output. Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of these adversarial examples. While UAPs have been explored in application domains beyond computer vision, little is known about their implications in the malware domain, where attackers must reason about satisfying challenging problem-space constraints. Therefore, we explore the challenges and strengths of UAPs in the context of malware classification.
Published in Springer Nature, 2023
Machine learning has become key in supporting decision-making processes across a wide array of applications, ranging from autonomous vehicles to malware detection. However, while highly accurate, these algorithms have been shown to exhibit vulnerabilities whereby they can be deceived into returning preferred predictions.