
Container security orchestration with Falco and Splunk Phantom


Container security orchestration allows you to define, as part of your security policy, how you are going to respond to your different container security incidents. These responses can be automated in what are called security playbooks. This way, you can define and orchestrate multiple workflows involving different software, both for sourcing events and for responding to them. In this post we will show how Falco and Splunk Phantom can be integrated to do exactly that.

What is Splunk Phantom?

Phantom is a security orchestration platform, part of the Splunk product portfolio. Phantom collects security events and reports from different sources, providing a unified security operations engine on top of them. With Phantom, you can automate tasks through security playbooks, orchestrate workflows and support a broad range of SOC (Security Operations Center) functions including event and case management, collaboration and reporting.

Imagine that to implement security on your Kubernetes cluster you have network perimeter security from your cloud provider, image scanning from a few different places because you use multiple registries, host OS software update notifications, and a container runtime security monitor (IDS) like Falco. With Phantom, you can unify the events coming from these four sources and create your own “security control center” with aggregated reporting and unified incident response workflows.


How to integrate Falco and Phantom for container security orchestration

Falco does an awesome job detecting anomalous runtime activity in your container fleet. For example: someone executing an interactive shell in a container; a container spawning a suspicious process like a webshell, a rootkit or a cryptominer; an unexpected network connection, like a new outgoing connection from a database; or an application reading credential files long after it was started, or writing files where it shouldn’t.

But Falco just emits security events: you need to send those somewhere else for processing, alerting, maybe triggering some kind of incident response reaction and, in the long term, auditing, reporting and storage. Phantom is great at all of this, so publishing Falco events into Phantom made a lot of sense. Falco adds value to Phantom by providing container and Kubernetes security insights; Phantom allows Falco to trigger incident response workflows for container security orchestration, and to store and report on the container security events.

In order to integrate Falco and Phantom for container security orchestration, we will use our Kubernetes response engine to publish Falco events into the NATS message broker. A Function as a Service, deployed with Kubeless and subscribed to the relevant message broker topics, will then format and forward our Falco container security events into Splunk Phantom:


[Image: Container security orchestration with Falco and Splunk Phantom]

So let’s deploy this setup in our Kubernetes cluster. First, make sure kubectl is pointing to the desired Kubernetes cluster and then execute:

$ git clone https://github.com/draios/falco.git
$ cd falco/integrations/kubernetes-response-engine/deployment/cncf
$ make

In case you don’t have Helm already running on your cluster, you first need to create Tiller’s RBAC configuration, as sketched below.
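A minimal sketch of that RBAC setup, assuming you are OK binding Tiller to the cluster-admin role (tighten this on production clusters):

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

With the service account and binding in place, initialize Helm and install the Falco chart with the NATS output enabled: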

$ helm init --service-account tiller
$ helm install --name sysdig-falco --set integrations.natsOutput.enabled=true stable/falco

After a couple of minutes (don’t worry if the pods restart a few times before entering the Running state, since there are some dependencies between them), you should have all the mentioned components up and running. You can check with kubectl get pods --all-namespaces:

NAMESPACE   NAME                                         READY   STATUS    RESTARTS   AGE
default     sysdig-falco-frgp9                           2/2     Running   1          33s
default     sysdig-falco-snjq7                           2/2     Running   1          31s
kubeless    kubeless-controller-manager-d6db997c-c8gg9   1/1     Running   0          2m
kubeless    nats-trigger-controller-5c6659cb6f-4g2nq     1/1     Running   0          2m
nats-io     nats-1                                       1/1     Running   0          2m
nats-io     nats-2                                       1/1     Running   0          2m
nats-io     nats-3                                       1/1     Running   0          1m
nats-io     nats-operator-847684f6c7-mgmtt               1/1     Running   0          4m
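At this point Falco is already publishing its alerts to the NATS cluster. If you are curious about the raw messages on the broker, you can run a temporary subscriber pod. Treat this as an optional sketch with a few assumptions: that the NATS client service is reachable at nats.nats-io.svc.cluster.local:4222, that alerts are published on subjects under the falco. prefix, and that the synadia/nats-box image (which ships the small nats-sub utility) is available in your environment:

$ kubectl run nats-peek --rm -it --restart=Never --image=synadia/nats-box -- nats-sub -s nats://nats.nats-io.svc.cluster.local:4222 "falco.>"

Each message is the JSON alert emitted by Falco (rule, priority, output string and the Kubernetes metadata of the offending pod); this is what the function we deploy later will reshape into Phantom events.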

Now that we have the Kubernetes response engine, we need to deploy Phantom. If you don’t have a Phantom commercial license, you can get a free trial by registering here. Once you register and log in, you will be able to download an OVA virtual machine image.

We need this VM instance to be reachable from your Kubernetes cluster. There are different options here: you can run the image locally and then set up NAT forwarders, or upload the VM to AWS and assign it a public IP address.
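A quick way to confirm that the cluster can actually reach the Phantom instance is to curl it from a throwaway pod. This is just a sketch: it assumes any image with curl (curlimages/curl here) and that you replace <phantom-address> with the IP or hostname you assigned to the VM:

$ kubectl run phantom-check --rm -it --restart=Never --image=curlimages/curl -- curl -k -s -o /dev/null -w "%{http_code}\n" https://<phantom-address>/

If this prints an HTTP status code instead of timing out, the network path from the cluster to Phantom is fine.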

With both sides up and running, you can next deploy the Kubeless function that will forward events to Phantom.

Make sure you have pipenv and kubeless installed in your environment as described in Deploying Kubernetes Response Engine Components: NATS and Kubeless framework and then go to the Falco repository you cloned earlier:

$ cd falco/integrations/kubernetes-response-engine/playbooks/
$ ./deploy_playbook -p phantom -e PHANTOM_USER=$SOMEUSER -e PHANTOM_PASSWORD=$SOMEPASSWORD -e PHANTOM_BASE_URL=$PHANTOM_URL -t "falco.*.*"

PHANTOM_USER and PHANTOM_PASSWORD are the credentials required to log in to Phantom, and PHANTOM_BASE_URL is the endpoint where Phantom is reachable. The -t "falco.*.*" flag is the NATS topic the function subscribes to, in this case every Falco event published on the broker.

Also, you will have to add -e VERIFY_SSL=False if your Phantom instance doesn’t have a valid SSL certificate.
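Under the hood, the playbook uses those credentials against Phantom’s REST API to create event containers. As a rough sketch of that kind of call (the exact payload the function builds will differ, and it assumes the default “events” label exists in your instance), you can create a container by hand with curl; this is also a handy way to double check that the credentials and base URL you pass to deploy_playbook actually work:

$ curl -k -u "$SOMEUSER:$SOMEPASSWORD" -H "Content-Type: application/json" -d '{"name": "Falco alert: Terminal shell in container", "label": "events"}' "$PHANTOM_URL/rest/container"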

The function will take a couple of minutes to be ready; you can check its current state with:

$ kubeless function ls
NAME            NAMESPACE   HANDLER           RUNTIME     DEPENDENCIES        STATUS
falco-phantom   default     phantom.handler   python3.6   cachetools==2.1.0   1/1 READY
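If the function gets stuck before reaching READY, or you later want to see how it processes each alert, you can check its logs through the Kubeless CLI (the same kubeless binary installed earlier):

$ kubeless function logs falco-phantom

Any problem reaching Phantom, such as wrong credentials, a wrong base URL or TLS errors, will show up there.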

Once ready, Falco events will automatically show up in the Phantom interface:


[Image: Falco events showing up in the Splunk Phantom interface]

Let’s run a simple example, spawning a shell in one of our containers, something that Falco detects by default:

$ kubectl exec -it sysdig-falco-frgp9 bash

Here we will immediately receive a new event, Terminal shell in container, together with all the metadata, like the specific pod and command that was used, visualized in a timeline. From here we can assign the issue to a support person, trigger a mail notification or launch any other response playbook configured in Phantom.
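If you want to exercise the whole pipeline a bit more, another thing Falco’s default ruleset flags is reading sensitive files inside a container; for example (reusing the same pod purely as a quick test):

$ kubectl exec -it sysdig-falco-frgp9 -- cat /etc/shadow

A few seconds later a second event for the sensitive file read should show up in Phantom as well, ready to be routed through whatever playbook you attach to it.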
