        • T1-24-01-S-N-CL
        • T2-24-01-S-N-CL
        • T3-24-01-S-N-CL
        • T4-24-01-S-E-M
        • T5-24-01-S-E-LM
        • T6-24-01-S-E-FH
        • T7-24-01-M-NE-CLM
        • T8-24-01-M-NE-CFHL
        • T9-24-01-M-NE-CLM
        • T1-24-02-S-N-CIKM
        • T2-24-02-S-N-CL
        • T3-24-02-S-N-CL
        • T4-24-02-S-E-M
        • T5-24-02-S-E-DL
        • T6-24-02-S-E-DEGN
        • T7-24-02-M-NE-CDEGLN
        • T8-24-02-M-NE-CDL
        • T9-24-02-M-NE-CLH
        • T1-25-01-S-N-CD
        • T2-25-01-S-N-CL
        • T3-25-01-S-N-CD
        • T4-25-01-S-E-FH
        • T5-25-01-S-E-CL
        • T6-25-01-S-E-CL
        • T7-25-01-M-NE-CDN
        • T8-25-01-M-NE-CLFH
        • T9-25-01-M-NE-CDFH
      • Model Description
      • Explainable AI
  • Making AI-Based Detection Transparent and Trustworthy
  • Explainable AI (XAI) is a core component of T9 Detect, designed to enhance the transparency and reliability of AI-driven malicious traffic detection. Rather than operating as a black-box system, T9 provides interpretable insights into how detection decisions are made.

    By exposing key factors that influence each decision, XAI enables analysts to understand why a specific flow or session was classified as malicious, supporting informed validation and investigation.
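
    As a concrete illustration, the sketch below decomposes one flow's detection score into per-feature contributions for a simple linear classifier. The feature names, the synthetic flow statistics, and the logistic-regression model are illustrative assumptions for this sketch only; they do not describe T9 Detect's actual feature set or models.

```python
# Per-flow attribution sketch for a linear malicious-traffic classifier.
# Feature names, synthetic data, and the model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FEATURES = ["pkt_count", "bytes_per_pkt", "duration_s", "syn_ratio", "mean_iat_ms"]

# Synthetic flow statistics: 500 benign (label 0) and 500 malicious (label 1) flows.
benign    = rng.normal([ 40, 600, 12, 0.05, 80], [10, 150, 4, 0.02, 20], (500, 5))
malicious = rng.normal([400, 120,  2, 0.60,  5], [80,  40, 1, 0.10,  2], (500, 5))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one flagged flow: for a linear model, each feature's contribution to
# the log-odds, relative to the average flow, is coef * (value - baseline).
flow = X[750]                                  # a flow from the malicious block
baseline = X.mean(axis=0)
contributions = model.coef_[0] * (flow - baseline)

print(f"P(malicious) = {model.predict_proba([flow])[0, 1]:.3f}")
for name, value, contrib in sorted(zip(FEATURES, flow, contributions),
                                   key=lambda t: -abs(t[2])):
    print(f"{name:>14} = {value:8.2f}   log-odds contribution {contrib:+.2f}")
```

    For a linear model this decomposition of the log-odds is exact. For tree-based or deep models, comparable per-flow attributions are typically produced with model-agnostic or gradient-based methods such as SHAP or Integrated Gradients.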

  • Key Goals of XAI for Malicious Network Detection
    • Traffic-Level Transparency
    • Revealing which traffic characteristics—such as payload patterns, flow statistics, or session behaviors—most strongly influenced the detection decision.
    • Behavior-Oriented Interpretability
    • Enabling analysts to interpret model outputs in terms of concrete network behaviors rather than abstract model features.
    • Detection Confidence and Validation
    • Supporting analyst-driven validation by explaining why a specific flow or session was classified as malicious, including borderline or uncertain cases.
    • Operational Accountability
    • Providing traceable and reviewable explanations that can be used for incident reporting, rule refinement, and responsible operation of AI-based network defense systems (a sketch of such a record follows this list).
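
    The sketch below shows how per-flow attributions of this kind could be packaged into a reviewable explanation record for analysts. The record layout, the feature-to-behavior descriptions, the example values, and the model identifier are all illustrative assumptions, not T9 Detect's actual reporting schema.

```python
# Sketch of a reviewable, analyst-facing explanation record. The layout,
# behavior descriptions, example values, and model identifier are illustrative.
import json
from datetime import datetime, timezone

# Hypothetical mapping from model features to analyst-readable behaviors.
BEHAVIOR = {
    "pkt_count":     "unusually high packet volume for the session",
    "syn_ratio":     "elevated share of SYN packets (scan-like behavior)",
    "duration_s":    "short-lived connection",
    "bytes_per_pkt": "atypical payload size per packet",
    "mean_iat_ms":   "near-constant, rapid packet timing",
}

def build_explanation_record(flow_id, proba, attributions, top_k=3):
    """Bundle verdict, confidence, and top attributions into a traceable record.

    `attributions` is a list of (feature, observed_value, contribution) tuples,
    such as the per-flow contributions computed in the earlier sketch.
    """
    ranked = sorted(attributions, key=lambda t: -abs(t[2]))[:top_k]
    return {
        "flow_id": flow_id,
        "verdict": "malicious" if proba >= 0.5 else "benign",
        "confidence": round(float(proba), 3),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "model_version": "example-lr-v0",      # placeholder, not a real model ID
        "top_factors": [
            {"feature": f,
             "value": round(float(v), 2),
             "log_odds_contribution": round(float(c), 2),
             "behavior": BEHAVIOR.get(f, "unmapped feature")}
            for f, v, c in ranked
        ],
    }

# Illustrative attribution values in the spirit of the sketch above.
example = [("pkt_count", 412.0, 2.8), ("syn_ratio", 0.62, 2.1),
           ("duration_s", 1.7, 1.3), ("bytes_per_pkt", 118.0, 0.9),
           ("mean_iat_ms", 4.9, 0.6)]
print(json.dumps(build_explanation_record("flow-000750", 0.97, example), indent=2))
```

    Storing such records alongside each alert gives analysts a concrete basis for validating borderline detections and a traceable artifact for incident reporting and later review.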


  • Figure: Key goals of XAI in malicious network detection: transparency, interpretability, trust, and accountability.
  • Copyright (C) 2024, KAIST Cyber Security Research Center. All Rights Reserved.