GA ISSA Talk: People > Machines
I am looking forward to speaking at the Georgia Annual ISSA Meeting on 11/15. The blog series that the talk is based on is below.
One of the areas we research heavily at WitFoo is how to reduce the number of investigations our customers have to perform each day. Internally, we call this the “n” problem. Another area of focus is how to reduce the amount of time our customers spend on each investigation. We refer to this as the “t” problem. The lower we drive n and t, the more work our customers can accomplish each day.
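The relationship is simple to sketch. The following snippet is an illustration only (the function and numbers are mine, not WitFoo's model): daily analyst load grows as the product of n and t, so cutting either one pays off directly.

```python
# Illustrative sketch (not WitFoo's actual model): daily analyst workload
# grows as the product of investigation count (n) and time per investigation (t).

def daily_workload_hours(n_investigations: float, t_hours_each: float) -> float:
    """Total analyst hours consumed per day: n * t."""
    return n_investigations * t_hours_each

# Example: 40 investigations/day at 1.5 hours each consumes 60 analyst-hours,
# i.e. roughly 7-8 full-time analysts just to keep pace.
print(daily_workload_hours(40, 1.5))   # 60.0

# Halving either n (better filtering) or t (faster investigations) halves the load.
print(daily_workload_hours(20, 1.5))   # 30.0
print(daily_workload_hours(40, 0.75))  # 30.0
```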
Better detection mechanisms built on algorithms (code) and machine learning (pattern recognition) are valuable tools for human responders. Playbook automation can take over the routine, well-defined tasks an analyst must perform so she can focus on what is important.
Playbook automation collects data from different security and logging tools and makes decisions on behalf of the incident responder.
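A minimal playbook might look something like the sketch below. The helper names (lookup_reputation, is_asset_vulnerable) and the decision rules are assumptions for illustration, not any vendor's actual API; the point is the shape: gather data from several tools, then decide whether a human needs to see the alert.

```python
# Hedged sketch of playbook automation: collect context from multiple tools,
# then decide on behalf of the responder. Helper functions are stand-ins.

from dataclasses import dataclass

@dataclass
class Alert:
    src_ip: str
    dst_host: str
    signature: str

def lookup_reputation(ip: str) -> str:
    """Stand-in for a threat-intel lookup (returns 'malicious' or 'benign')."""
    return "benign"

def is_asset_vulnerable(host: str, signature: str) -> bool:
    """Stand-in for a vulnerability-scanner query about the targeted host."""
    return False

def run_playbook(alert: Alert) -> str:
    """Escalate, close, or queue the alert based on the collected context."""
    if lookup_reputation(alert.src_ip) == "malicious":
        return "escalate"
    if not is_asset_vulnerable(alert.dst_host, alert.signature):
        return "close: target not vulnerable to this attack"
    return "queue for human review"

print(run_playbook(Alert("203.0.113.7", "web01", "SQLi attempt")))
```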
Computer scientists love the idea of artificial intelligence (AI). It is the centerpiece of many mainstream science fiction works. It’s also a preferred buzzword of lazy vendors and marketers. Until computers can convince (trick) a reasonable human being that they are living beings (the Turing test), all claims of AI are misleading at best. In this installment, I won’t debunk specific claims of AI; instead, we will examine the difference between how computers and humans think and the implications of those differences.
When I was learning how to troubleshoot and repair electronics in the Navy, I would sometimes challenge one of the instructors on how something worked. If I delved into a complicated subject, I was often told it worked on “FM,” which meant f***ing magic. That rarely stopped me, however, and I often found the concepts were not overly complicated, just not directly relevant to my training.
There is some FM in information security that I’d like to demystify as we examine how tools can enable and not hinder the craft. We’ll examine algorithms and machine learning in this installment.
Cybersecurity incident response has only been a part of human history for a couple of decades. Over that short span, industry leaders, analysts and vendors have put a heavy focus on technology as the solution to problems within the craft. In this series, we will examine the preeminent importance of the craftsman over his tools and the role tools should play in making the world safer.
Fail fast. It’s one of the Agile buzz phrases that gets thrown around a lot in software product organizations these days, particularly organizations trying to embrace the Lean/Agile approach to production. The term ‘fail fast’ is grounded in the Lean concept of continuous learning. Lean theory contends that learning is not a singular event, but rather a continuous process of trial and error. The Lean approach advocates that the smaller the ‘set’ of learning and the faster it takes place, the better. Thus, fail fast should really be ‘learn fast’ or ‘learn something small fast,’ but that’s not nearly as catchy. This is all grounded in the heavily researched area of human learning. Humans learn by trial and error. Lean simply says organizations should too.
First, the nature of evolution discards noise. Much like the concept in biology, only fit, useful facts survive the evolution process. When exposed to more complex systems, noise goes the way of the dodo bird. A “possible SQL injection attack on MySQL” event becomes irrelevant when vulnerability reports show the targeted server isn’t running MySQL. As data becomes a more mature, evolved object, the irrelevant events fall away, as in the sketch below.
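Here is a hedged illustration of that MySQL example. The asset inventory and event fields are invented for the sketch; the idea is simply that an event only survives evolution if the targeted host actually runs the software the attack is aimed at.

```python
# Hedged sketch: discard events whose targeted software isn't present on the host.
# Asset inventory and event fields are invented for illustration.

assets = {
    "web01": {"software": ["nginx", "postgresql"]},   # no MySQL installed
    "db02":  {"software": ["mysql"]},
}

events = [
    {"host": "web01", "signature": "possible SQL injection attack on MySQL", "targets": "mysql"},
    {"host": "db02",  "signature": "possible SQL injection attack on MySQL", "targets": "mysql"},
]

def is_relevant(event: dict) -> bool:
    """An event is noise if the targeted software isn't running on the host."""
    return event["targets"] in assets.get(event["host"], {}).get("software", [])

surviving = [e for e in events if is_relevant(e)]
print(surviving)  # only the db02 event survives; the web01 event falls away
```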
When I was leading the Network Security Group at the US Naval Postgraduate School, I was overwhelmed with the degree of failure we experienced. The number of events, the complexity of investigations and an immature security infrastructure created an environment of perpetual failure. After gathering the basic business metrics I discussed in Metering Incident Response 101, I decided it was time to push the problem up the chain of command.