Feedback-Driven Learn to Reason in Adversarial Environments for Autonomic Cyber Systems
The growing complexity of cyber systems has made them difficult for human operators to defend, particularly against intelligent and resourceful adversaries who target multiple system components simultaneously, employ previously unobserved attack vectors, and use stealth and deception to evade detection. There is a need to develop autonomic cyber systems that integrate statistical learning with rules-based formal reasoning to provide adaptive, robust situational awareness and a resilient system response. In this collaborative research effort, we propose to develop a feedback-driven Learn to Reason (L2R) framework that integrates statistical learning with formal reasoning in adversarial environments. Our insight is that realizing the potential benefits of L2R requires continuous interaction between the statistical and formal components, both at intermediate time steps and across multiple layers of abstraction.