We are primarily interested in developing a concrete mathematical framework for bug-free algorithm design in security, and in studying security protocols from communication, coding, and information-theoretic viewpoints. We use tools from these well-founded areas to design and analyze security protocols and to identify bugs that might otherwise go undetected.

Our Current Projects

Learn to Reason: Feedback-Driven Learn to Reason in Adversarial Environments for Autonomic Cyber Systems

The growing complexity of cyber systems has made them difficult for human operators to defend, particularly against intelligent and resourceful adversaries who target multiple system components simultaneously, employ previously unobserved attack vectors, and use stealth and deception to evade detection. There is a need to develop autonomic cyber systems that integrate statistical learning with rules-based formal reasoning to provide adaptive, robust situational awareness and a resilient system response. In this collaborative research effort, we propose a feedback-driven Learn to Reason (L2R) framework that integrates statistical learning with formal reasoning in adversarial environments. Our insight is that realizing the potential benefits of L2R requires continuous interaction between the statistical and formal components, both at intermediate time steps and across multiple layers of abstraction.


Learn more about the Learn to Reason project (L2RAVE)
