MLSEC LAB

Machine Learning & Security Laboratory

Independent academic research initiative with a doctoral orientation

OFFICIAL ANNOUNCEMENT

The former commercial entity EUROCYBERSECURITE has ceased operations. MLSEC Lab is now exclusively dedicated to independent scientific research in machine learning security and advanced cybersecurity.

Research Areas

Machine Learning Security

Study of vulnerabilities, robustness limits, and attack surfaces in modern machine learning systems.

  • Adversarial examples
  • Model poisoning and data poisoning
  • Robust and certified learning

Cybersecurity & AI

Application of AI models to real-world security problems under adversarial constraints.

  • Network intrusion detection systems (NIDS)
  • Malware behavior modeling
  • Threat intelligence automation

Trustworthy & Explainable AI

Ensuring transparency, accountability, and interpretability of security-oriented AI systems.

  • Model interpretability
  • Explainable anomaly detection
  • Decision traceability

Research Articles & Technical Notes

Adversarial Machine Learning in Network Intrusion Detection

Abstract. Machine learning-based intrusion detection systems have demonstrated strong performance in controlled environments. However, when deployed in adversarial contexts, these systems become vulnerable to carefully crafted inputs designed to evade detection.

Background. Traditional NIDS rely on static signatures, whereas ML-based systems learn statistical representations of traffic patterns. This learning process introduces new attack vectors, including evasion and poisoning.

Technical Approach. Our research analyzes gradient-based evasion attacks, feature-space manipulation, and transferability across models trained on network flow datasets.
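
As a minimal illustration of the gradient-based evasion attacks studied here, the sketch below applies a single FGSM-style perturbation to a hypothetical flow classifier. The architecture, feature count, and epsilon budget are illustrative assumptions, not the configuration evaluated in this work.

    import torch
    import torch.nn as nn

    # Hypothetical flow classifier: 20 normalized flow features -> {benign, malicious}.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    model.eval()

    def fgsm_evasion(model, x, label, epsilon=0.05):
        """One FGSM step: perturb a malicious flow in the direction that
        increases the loss on its true label, pushing it toward the
        decision boundary while staying within an epsilon budget."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), label)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()   # keep features in valid range

    # Toy usage: one malicious flow (label 1), features scaled to [0, 1].
    x = torch.rand(1, 20)
    y = torch.tensor([1])
    print(model(x).argmax(1), model(fgsm_evasion(model, x, y)).argmax(1))

Iterating this step and constraining perturbations to features an attacker actually controls yields the stronger, more realistic variants analyzed in the article.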

Security Implications. Results demonstrate that high detection accuracy does not imply robustness, highlighting the need for adversarial evaluation pipelines.

Future Work. Development of hybrid detection models combining symbolic rules with robust learning techniques.

Data Poisoning Attacks Against Security-Oriented ML Pipelines

Abstract. Data collection is a critical phase of security-oriented machine learning. Poisoned training data can silently compromise the detection systems trained on it.

Background. Security datasets are often collected from uncontrolled environments, making them susceptible to adversarial manipulation.

Methodology. We evaluate label-flipping, backdoor injection, and clean-label poisoning attacks on supervised and semi-supervised models.
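
A minimal label-flipping sketch on synthetic data; the classifier, dataset, and poisoning ratios are illustrative choices rather than the experimental setup of this study.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def flip_labels(y, ratio, rng):
        """Flip the labels of a random fraction of training points."""
        y = y.copy()
        idx = rng.choice(len(y), size=int(ratio * len(y)), replace=False)
        y[idx] = 1 - y[idx]
        return y

    for ratio in (0.0, 0.05, 0.20):
        clf = LogisticRegression(max_iter=1000).fit(X_tr, flip_labels(y_tr, ratio, rng))
        print(f"poison ratio {ratio:.2f} -> test accuracy {clf.score(X_te, y_te):.3f}")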

Findings. Even small poisoning ratios can significantly degrade detection performance while remaining difficult to detect with standard statistical checks.

Explainable AI for Cybersecurity Decision-Making

Abstract. Security analysts require interpretable alerts to validate automated decisions produced by AI systems.

Contribution. This work explores feature attribution methods applied to anomaly detection models, improving analyst trust and operational usability.
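
A minimal sketch of one such attribution, assuming a simple occlusion scheme over an IsolationForest detector: each feature is scored by how much replacing it with a baseline value moves the anomaly score back toward normal. Production systems would typically use richer methods (e.g. Shapley-value approximations); everything here is illustrative.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 5))
    detector = IsolationForest(random_state=0).fit(X_train)

    def occlusion_attribution(detector, x, baseline):
        """Per-feature attribution: change in anomaly score when the
        feature is replaced by a baseline (training-mean) value."""
        base_score = detector.score_samples(x.reshape(1, -1))[0]
        scores = np.zeros(len(x))
        for j in range(len(x)):
            x_occ = x.copy()
            x_occ[j] = baseline[j]
            scores[j] = detector.score_samples(x_occ.reshape(1, -1))[0] - base_score
        return scores

    # Anomalous point: feature 2 sits far outside the training distribution.
    x_anom = np.array([0.1, -0.2, 8.0, 0.0, 0.3])
    print(occlusion_attribution(detector, x_anom, X_train.mean(axis=0)))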

Challenges. Explainability techniques may themselves leak sensitive information or be exploited by attackers.

Toward Autonomous Defensive Systems

Overview. Autonomous defense systems aim to detect, analyze, and respond to cyber threats with minimal human intervention.

Research Focus. We investigate reinforcement learning, online learning, and safe decision-making under uncertainty.
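
A minimal epsilon-greedy sketch of online action selection for automated response; the action set, reward values, and exploration rate are entirely hypothetical and stand in for the learned response policies studied here.

    import numpy as np

    rng = np.random.default_rng(0)
    ACTIONS = ["monitor", "rate_limit", "block"]   # hypothetical responses
    true_reward = np.array([0.2, 0.5, 0.7])        # unknown to the agent

    q = np.zeros(len(ACTIONS))   # running reward estimates
    n = np.zeros(len(ACTIONS))   # times each action was tried
    epsilon = 0.1

    for step in range(5000):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(q.argmax())
        reward = rng.normal(true_reward[a], 0.1)   # simulated outcome
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]             # incremental mean update

    print(dict(zip(ACTIONS, q.round(2))))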

Open Problems. Safety guarantees, explainability, and alignment with human operators remain major research challenges.

Methods & Research Framework

Secure Data Engineering

Dataset validation, anomaly inspection, and adversarial risk assessment prior to training.
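
A minimal sketch of one such pre-training check, assuming a coarse per-feature z-score screen; the threshold is illustrative, and a real pipeline would combine several complementary validators.

    import numpy as np

    def flag_suspicious_rows(X, z_threshold=4.0):
        """Flag rows containing any feature more than z_threshold
        standard deviations from its column mean. A coarse screen,
        not a substitute for a full adversarial risk assessment."""
        mu = X.mean(axis=0)
        sigma = X.std(axis=0) + 1e-12          # avoid division by zero
        z = np.abs((X - mu) / sigma)
        return np.where((z > z_threshold).any(axis=1))[0]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    X[42, 3] = 50.0                            # injected outlier
    print(flag_suspicious_rows(X))             # -> [42]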

Robust Model Training

Adversarial training, regularization, and robustness evaluation beyond standard accuracy metrics.
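
A compressed adversarial-training sketch, assuming FGSM-crafted perturbations and placeholder data; the architecture and hyperparameters are illustrative only.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    X = torch.rand(512, 20)                    # placeholder features
    y = (X.sum(dim=1) > 10).long()             # placeholder labels

    for epoch in range(10):
        # Craft FGSM perturbations of the current data on the fly.
        X_adv = X.clone().detach().requires_grad_(True)
        loss_fn(model(X_adv), y).backward()
        X_adv = (X_adv + 0.05 * X_adv.grad.sign()).clamp(0, 1).detach()

        # Train on a mix of clean and adversarial examples.
        opt.zero_grad()
        loss = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
        loss.backward()
        opt.step()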

Deployment & Monitoring

Continuous monitoring of model behavior, drift detection, and post-deployment security evaluation.
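
A minimal drift-detection sketch, assuming a per-feature two-sample Kolmogorov-Smirnov test between a training-time reference window and live traffic; window sizes and the significance level are illustrative.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(5000, 8))   # training-time snapshot
    live = rng.normal(0.0, 1.0, size=(1000, 8))
    live[:, 2] += 0.8                                  # simulated drift in one feature

    def drifted_features(reference, live, alpha=0.01):
        """Indices of features whose live distribution differs from
        the reference under a two-sample KS test."""
        return [j for j in range(reference.shape[1])
                if ks_2samp(reference[:, j], live[:, j]).pvalue < alpha]

    print(drifted_features(reference, live))           # -> [2]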

MLSEC Lab positions security as a first-class constraint in machine learning research, not an afterthought.