Strelka - Scanning Files At Scale With Python And ZeroMQ

Strelka is a real-time file scanning system used for threat hunting, threat detection, and incident response. Based on the design established by Lockheed Martin's Laika BOSS and similar projects (see: related projects), Strelka's purpose is to perform file extraction and metadata collection at huge scale.
Strelka differs from its sibling projects in a few significant ways:
  • Codebase is Python 3 (minimum supported version is 3.6)
  • Designed for non-interactive, distributed systems (network security monitoring sensors, live response scripts, disk/memory extraction, etc.)
  • Supports direct and remote file requests (Amazon S3, Google Cloud Storage, etc.) with optional encryption and authentication
  • Uses widely supported networking, messaging, and data libraries/formats (ZeroMQ, protocol buffers, YAML, JSON)
  • Built-in scan result logging and log management (compatible with Filebeat/ElasticStack, Splunk, etc.)

Frequently Asked Questions

"Who is Strelka?"
Strelka is one of the second-generation Soviet space dogs to achieve orbital spaceflight -- the name is an homage to Lockheed Martin's Laika BOSS, one of the first public projects of this type and the project on which Strelka's core design is based.

"Why would I want a file scanning system?"
File metadata is an additional pillar of data (alongside network, endpoint, authentication, and cloud) that is effective in enabling threat hunting, threat detection, and incident response, and it can help event analysts and incident responders bridge visibility gaps in their environment. This type of system is especially useful for identifying threat actors during KC3 (delivery) and KC7 (actions on objectives). For examples of what Strelka can do, please read the use cases.

"Should I switch from my current file scanning system to Strelka?"
It depends -- we recommend reviewing the features of each and choosing the most appropriate tool for your needs. The project documentation describes the features we believe are the most significant motivating factors for switching to Strelka.

"Are Strelka's scanners compatible with Laika BOSS, File Scanning Framework, or Assemblyline?"
Due to differences in design, Strelka's scanners are not directly compatible with Laika BOSS, File Scanning Framework, or Assemblyline. With some effort, most scanners can likely be ported to the other projects.

"Is Strelka an intrusion detection system (IDS)?"
Strelka shouldn't be thought of as an IDS, but it can be used for threat detection through YARA rule matching and downstream metadata interpretation. Strelka's design follows the philosophy established by other popular metadata collection systems (Bro, Sysmon, Volatility, etc.): it extracts data and leaves the decision-making up to the user.

"Does it work at scale?"
Everyone has their own definition of "at scale," but we have been using Strelka and systems like it to scan up to 100 million files each day for over a year and have never reached a point where the system could not scale to our needs. As file volume and diversity increase, horizontally scaling the system should allow you to scan any number of files.

"Doesn't this use a lot of bandwidth?"
Yep! Strelka isn't designed to operate in limited bandwidth environments, but we have experimented with solutions to this and there are tricks you can use to reduce bandwidth. These are what we've found most successful:
  • Reduce the total volume of files sent to Strelka
  • Use a tracking system to only send unique files to Strelka (networked Redis servers are especially useful for this; a minimal sketch follows this list)
  • Use traffic control (tc) to shape connections to Strelka
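Below is a minimal sketch of the Redis-based tracking idea: skip any file whose hash has already been submitted recently. The Redis host, key prefix, TTL, and the send_to_strelka() placeholder are assumptions for illustration and are not part of Strelka.
import hashlib

import redis


def send_if_unique(path, client, ttl=86400):
    """Send a file to Strelka only if its SHA-256 has not been seen within the TTL."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # SET with nx=True returns None when the key already exists, so duplicates are skipped.
    if client.set(f"strelka:seen:{digest}", 1, ex=ttl, nx=True):
        send_to_strelka(path)


def send_to_strelka(path):
    # Placeholder: in practice this would wrap strelka_user_client.py or a custom client.
    print(f"would send {path}")


if __name__ == "__main__":
    send_if_unique("/tmp/sample.bin", redis.Redis(host="redis.example.com", port=6379))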

"Should I run my Strelka cluster on my Bro/Suricata network sensor?"
No! Strelka clusters run CPU-intensive processes that will negatively impact system-critical applications like Bro and Suricata. If you want to integrate a network sensor with Strelka, then use strelka_dirstream.py. This utility is capable of sending millions of files per day from a single network sensor to a Strelka cluster without impacting system-critical applications.

"I have other questions!"
Please file an issue or contact the project team at TTS-CFC-OpenSource@target.com. The project lead can also be reached on Twitter at @jshlbrd.

Installation
The recommended operating system for Strelka is Ubuntu 18.04 LTS (Bionic Beaver) -- it may work with earlier versions of Ubuntu if the appropriate packages are installed. We recommend using the Docker container for production deployments and welcome pull requests that add instructions for installing on other operating systems.

Ubuntu 18.04 LTS
  1. Update packages and install build packages
    apt-get update && apt-get install --no-install-recommends automake build-essential curl gcc git libtool make python3-dev python3-pip python3-wheel
  2. Install runtime packages
    apt-get install --no-install-recommends antiword libarchive-dev libfuzzy-dev libimage-exiftool-perl libmagic-dev libssl-dev python3-setuptools tesseract-ocr unrar upx jq
  3. Install pip3 packages
    pip3 install beautifulsoup4 boltons boto3 gevent google-cloud-storage html5lib inflection interruptingcow jsbeautifier libarchive-c lxml git+https://github.com/aaronst/macholibre.git olefile oletools pdfminer.six pefile pgpdump3 protobuf pyelftools pygments pyjsparser pylzma git+https://github.com/jshlbrd/pyopenssl.git python-docx git+https://github.com/jshlbrd/python-entropy.git python-keystoneclient python-magic python-swiftclient pyyaml pyzmq rarfile requests rpmfile schedule ssdeep tnefparse
  4. Install YARA
    curl -OL https://github.com/VirusTotal/yara/archive/v3.8.1.tar.gz
    tar -zxvf v3.8.1.tar.gz
    cd yara-3.8.1/
    ./bootstrap.sh
    ./configure --with-crypto --enable-dotnet --enable-magic
    make && make install && make check
    echo "/usr/local/lib" >> /etc/ld.so.conf
    ldconfig
  5. Install yara-python
    curl -OL https://github.com/VirusTotal/yara-python/archive/v3.8.1.tar.gz
    tar -zxvf v3.8.1.tar.gz
    cd yara-python-3.8.1/
    python3 setup.py build --dynamic-linking
    python3 setup.py install
  6. Create Strelka directories
    mkdir /var/log/strelka/ && mkdir /opt/strelka/
  7. Clone this repository
    git clone https://github.com/target/strelka.git /opt/strelka/
  8. Compile the Strelka protobuf (this requires the protoc compiler, e.g. from the protobuf-compiler package)
    cd /opt/strelka/server/ && protoc --python_out=. strelka.proto
  9. (Optional) Install the Strelka utilities
    cd /opt/strelka/ && python3 setup.py -q build && python3 setup.py -q install && python3 setup.py -q clean --all

Docker
  1. Clone this repository
    git clone https://github.com/target/strelka.git /opt/strelka/
  2. Build the container
    cd /opt/strelka/ && docker build -t strelka .

Quickstart
By default, Strelka is configured to use a minimal "quickstart" deployment that allows users to test the system. This configuration is not recommended for production deployments. Using two Terminal windows, do the following:
Terminal 1:
$ strelka.py
Terminal 2:
$ strelka_user_client.py --broker 127.0.0.1:5558 --path <path to the file to scan>
$ cat /var/log/strelka/*.log | jq .
Terminal 1 runs a Strelka cluster (broker, 4 workers, and log rotation) with debug logging, and Terminal 2 is used to send file requests to the cluster and read the scan results.

Deployment

Utilities
Strelka's design as a distributed system creates the need for client-side and server-side utilities. Client-side utilities provide methods for sending file requests to a cluster, and server-side utilities provide methods for distributing and scanning files sent to a cluster.

strelka.py
strelka.py is a non-interactive, server-side utility that contains everything needed for running a large-scale, distributed Strelka cluster. This includes:
  • Capability to run servers in any combination of broker/workers
    • Broker distributes file tasks to workers
    • Workers perform file analysis on tasks
  • On-disk scan result logging
    • Configurable log rotation and management
    • Compatible with external log shippers (e.g. Filebeat, Splunk Universal Forwarder, etc.)
  • Supports encryption and authentication for connections between clients and brokers
  • Self-healing child processes (brokers, workers, log management)
This utility is managed with two configuration files: etc/strelka/strelka.yml and etc/strelka/pylogging.ini.
The help page for strelka.py is shown below:
usage: strelka.py [options]

runs Strelka as a distributed cluster.

optional arguments:
-h, --help show this help message and exit
-d, --debug enable debug messages to the console
-c STRELKA_CFG, --strelka-config STRELKA_CFG
path to strelka configuration file
-l LOGGING_INI, --logging-ini LOGGING_INI
path to python logging configuration file
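For example, a production-style invocation might point at the configuration files in the repository cloned to /opt/strelka/ (the paths below are assumptions based on the installation steps above):
$ strelka.py -c /opt/strelka/etc/strelka/strelka.yml -l /opt/strelka/etc/strelka/pylogging.ini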

strelka_dirstream.py
strelka_dirstream.py is a non-interactive, client-side utility used for sending files from a directory to a Strelka cluster in near real-time. This utility uses inotify to watch the directory and sends files to the cluster as soon as possible after they are written.
Additionally, for select file sources, this utility can parse metadata embedded in the file's filename and send it to the cluster as external metadata. Bro network sensors are currently the only supported file source, but other application-specific sources can be added.
Using the utility with Bro requires no modification of the Bro source code, but it does require the network sensor to run a Bro script that enables file extraction. We recommend using our stub Bro script (etc/bro/extract-strelka.bro) to extract files. Other extraction scripts will also work, but they will not parse Bro's metadata.
This utility is managed with one configuration file: etc/dirstream/dirstream.yml.
The help page for strelka_dirstream.py is shown below:
usage: strelka_dirstream.py [options]

sends files from a directory to a Strelka cluster in near real-time.

optional arguments:
-h, --help show this help message and exit
-d, --debug enable debug messages to the console
-c DIRSTREAM_CFG, --dirstream-config DIRSTREAM_CFG
path to dirstream configuration file

strelka_user_client.py
strelka_user_client.py is a user-driven, client-side utility that is used for sending ad-hoc file requests to a cluster. This client should be used when file analysis is needed for a specific file or group of files -- it is explicitly designed for users and should not be expected to perform long-lived or fully automated file requests. We recommend using this utility as an example of what is required in building new client utilities.
Using this utility, users can send three types of file requests (see the --path and --location options in the help page below).
The help page for strelka_user_client.py is shown below:
usage: strelka_user_client.py [options]

sends ad-hoc file requests to a Strelka cluster.

optional arguments:
-h, --help show this help message and exit
-d, --debug enable debug messages to the console
-b BROKER, --broker BROKER
network address and network port of the broker (e.g.
127.0.0.1:5558)
-p PATH, --path PATH path to the file or directory of files to send to the
broker
-l LOCATION, --location LOCATION
JSON representation of a location for the cluster to
retrieve files from
-t TIMEOUT, --timeout TIMEOUT
amount of time (in seconds) to wait until a file
transfer times out
-bpk BROKER_PUBLIC_KEY, --broker-public-key BROKER_PUBLIC_KEY
location of the broker Curve public key certificate
(this option enables curve encryption and must be used
if the broker has curve enabled)
-csk CLIENT_SECRET_KEY, --client-secret-key CLIENT_SECRET_KEY
location of the client Curve secret key certificate
(this option enables curve encryption and must be used
if the broker has curve enabled)
-ug, --use-green determines if PyZMQ green should be used, which can
increase performance at the risk of message loss
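For example, an ad-hoc request for a single file sent to a remote broker with Curve encryption enabled might look like the following; the broker address and certificate paths are placeholders:
$ strelka_user_client.py --broker 10.0.0.5:5558 --path /tmp/suspicious.doc --timeout 60 --broker-public-key /etc/strelka/keys/broker_public.key --client-secret-key /etc/strelka/keys/client_secret.key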

generate_curve_certificates.py
generate_curve_certificates.py is a utility used for generating broker and worker Curve certificates. This utility is required for setting up Curve encryption/authentication.
The help page for generate_curve_certificates.py is shown below:
usage: generate_curve_certificates.py [options]

generates curve certificates used by brokers and clients.

optional arguments:
-h, --help show this help message and exit
-p PATH, --path PATH path to store keys in (defaults to current working
directory)
-b, --broker generate curve certificates for a broker
-c, --client generate curve certificates for a client
-cf CLIENT_FILE, --client-file CLIENT_FILE
path to a file containing line-separated list of
clients to generate keys for, useful for creating many
client keys at once

validate_yara.py
validate_yara.py is a utility used for recursively validating a directory of YARA rules files. This can be useful when debugging issues related to the ScanYara scanner.
The help page for validate_yara.py is shown below:
usage: validate_yara.py [options]

validates YARA rules files.

optional arguments:
-h, --help show this help message and exit
-p PATH, --path PATH path to directory containing YARA rules
-e, --error boolean that determines if warnings should cause
errors
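For example, to validate the taste rules that ship in the repository and treat warnings as errors (the path assumes the repository was cloned to /opt/strelka/):
$ validate_yara.py --path /opt/strelka/etc/strelka/taste/ --error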

Configuration Files
Strelka uses YAML for configuring client-side and server-side utilities. We recommend using the default configurations and modifying the options as needed.

Strelka Configuration (strelka.py)
Strelka's cluster configuration file is stored in etc/strelka/strelka.yml and contains three sections: daemon, remote, and scan.

Daemon Configuration
The daemon configuration contains five sub-sections: processes, network, broker, workers, and logrotate. A hedged example covering all five follows the option descriptions below.
The "processes" section controls the processes launched by the daemon. The configuration options are:
  • "run_broker": boolean that determines if the server should run a Strelka broker process (defaults to True)
  • "run_workers": boolean that determines if the server should run Strelka worker processes (defaults to True)
  • "run_logrotate": boolean that determines if the server should run a Strelka log rotation process (defaults to True)
  • "worker_count": number of workers to spawn (defaults to 4)
  • "shutdown_timeout": amount of time (in seconds) that will elapse before the daemon forcibly kills child processes after they have received a shutdown command (defaults to 45 seconds)
The "network" section controls network connectivity. The configuration options are:
  • "broker": network address of the broker (defaults to 127.0.0.1)
  • "request_socket_port": network port used by clients to send file requests to the broker (defaults to 5558)
  • "task_socket_port": network port used by workers to receive tasks from the broker (defaults to 5559)
The "broker" section controls settings related to the broker process. The configuration options are:
  • "poller_timeout": amount of time (in milliseconds) that the broker polls for client requests and worker statuses (defaults to 1000 milliseconds)
  • "broker_secret_key": location of the broker Curve secret key certificate (enables Curve encryption, requires clients to use Curve, defaults to None)
  • "client_public_keys": location of the directory containing client Curve public key certificates (enables Curve encryption and authentication, requires clients to use Curve, defaults to None)
  • "prune_frequency": frequency (in seconds) at which the broker prunes dead workers (defaults to 5 seconds)
  • "prune_delta": delta (in seconds) that must pass since a worker last checked in with the broker before it is considered dead and is pruned (defaults to 10 seconds)
The "workers" section controls settings related to worker processes. The configuration options are:
  • "task_socket_reconnect": amount of time (in milliseconds) that the task socket will attempt to reconnect in the event of TCP disconnection, this will have additional jitter applied (defaults to 100ms plus jitter)
  • "task_socket_reconnect_max": maximum amount of time (in milliseconds) that the task socket will attempt to reconnect in the event of TCP disconnection, this will have additional jitter applied (defaults to 4000ms plus jitter)
  • "poller_timeout": amount of time (in milliseconds) that workers poll for file tasks (defaults to 1000 milliseconds)
  • "file_max": number of files a worker will process before shutting down (defaults to 10000)
  • "time_to_live": amount of time (in minutes) that a worker will run before shutting down (defaults to 30 minutes)
  • "heartbeat_frequency": frequency (in seconds) at which a worker sends a heartbeat to the broker if it has not received any file tasks (defaults to 10 seconds)
  • "log_directory": location where worker scan results are logged to (defaults to /var/log/strelka/)
  • "log_field_case": field case ("camel" or "snake") of the scan result log file data (defaults to camel)
  • "log_bundle_events": boolean that determines if scan results should be bundled in single event as an array or in multiple events (defaults to True)
The "logrotate" section controls settings related to the log rotation process. The configuration options are:
  • "directory": directory to run log rotation on (defaults to /var/log/strelka/)
  • "compression_delta": delta (in minutes) that must pass since a log file was last modified before it is compressed (defaults to 15 minutes)
  • "deletion_delta": delta (in minutes) that must pass since a compressed log file was last modified before it is deleted (defaults to 360 minutes / 6 hours)

Remote Configuration
The remote configuration contains one sub-section: remote.
The "remote" section controls how workers retrieve files from remote file stores. Google Cloud Storage, Amazon S3, OpenStack Swift, and HTTP file stores are supported. All options in this configuration file are optionally read from environment variables if they are "null". The configuration options are:
  • "remote_timeout": amount of time (in seconds) to wait before timing out individual file retrieval
  • "remote_retries": number of times individual file retrieval will be re-attempted in the event of a timeout
  • "google_application_credentials": path to the Google Cloud Storage JSON credentials file
  • "aws_access_key_id": AWS access key ID
  • "aws_secret_access_key": AWS secret access key
  • "aws_default_region": default AWS region
  • "st_auth_version": OpenStack authentication version (defaults to 3)
  • "os_auth_url": OpenStack Keystone authentication URL
  • "os_username": OpenStack username
  • "os_password": OpenStack password
  • "os_cert": OpenStack Keystone certificate
  • "os_cacert": OpenStack Keystone CA Certificate
  • "os_user_domain_name": OpenStack user domain
  • "os_project_name": OpenStack project name
  • "os_project_domain_name": OpenStack project domain
  • "http_basic_user": HTTP Basic authentication username
  • "http_basic_pass": HTTP Basic authentication password
  • "http_verify": path to the CA bundle (file or directory) used for SSL verification (defaults to False, no verification)

Scan Configuration
The scan configuration contains two sub-sections: distribution and scanners.
The "distribution" section controls how files are distributed through the system. The configuration options are:
  • "close_timeout": amount of time (in seconds) that a scanner can spend closing itself (defaults to 30 seconds)
  • "distribution_timeout": amount of time (in seconds) that a single file can be distributed to all scanners (defaults to 1800 seconds / 30 minutes)
  • "scanner_timeout": amount of time (in seconds) that a scanner can spend scanning a file (defaults to 600 seconds / 10 minutes, can be overridden per-scanner)
  • "maximum_depth": maximum depth that child files will be processed by scanners
  • "taste_mime_db": location of the MIME database used to taste files (defaults to None, system default)
  • "taste_yara_rules": location of the directory of YARA files that contains rules used to taste files (defaults to etc/strelka/taste/)
The "scanners" section controls which scanners are assigned to each file; each scanner is assigned by mapping flavors, filenames, and sources from this configuration to the file. "scanners" must always be a dictionary where the key is the scanner name (e.g. ScanZip) and the value is a list of dictionaries containing values for mappings, scanner priority, and scanner options.
Assignment occurs through a system of positive and negative matches: any negative match causes the scanner to skip assignment and at least one positive match causes the scanner to be assigned. A unique identifier (*) is used to assign scanners to all flavors. See File Distribution, Scanners, Flavors, and Tasting for more details on flavors.
Below is a sample configuration that runs the scanner "ScanHeader" on all files and the scanner "ScanRar" on files that match a YARA rule named "rar_file".
scanners:
  'ScanHeader':
    - positive:
        flavors:
          - '*'
      priority: 5
      options:
        length: 50
  'ScanRar':
    - positive:
        flavors:
          - 'rar_file'
      priority: 5
      options:
        limit: 1000
The "positive" dictionary determines which flavors, filenames, and sources cause the scanner to be assigned. Flavors is a list of literal strings while filenames and sources are regular expressions. One positive match will assign the scanner to the file.
Below is a sample configuration that shows how RAR files can be matched against a YARA rule (rar_file), a MIME type (application/x-rar), and a filename (any that end with .rar).
scanners:
  'ScanRar':
    - positive:
        flavors:
          - 'application/x-rar'
          - 'rar_file'
        filename: '\.rar$'
      priority: 5
      options:
        limit: 1000
Each scanner also supports negative matching through the "negative" dictionary. Negative matches occur before positive matches, so any negative match guarantees that the scanner will not be assigned. Similar to positive matches, negative matches support flavors, filenames, and sources.
Below is a sample configuration that shows how RAR files can be positively matched against a YARA rule (rar_file) and a MIME type (application/x-rar), but only if they are not negatively matched against a filename (\.rar$). This configuration would cause ScanRar to only be assigned to RAR files that do not have the extension ".rar".
scanners:
  'ScanRar':
    - negative:
        filename: '\.rar$'
      positive:
        flavors:
          - 'application/x-rar'
          - 'rar_file'
      priority: 5
      options:
        limit: 1000
Each scanner supports multiple mappings -- this makes it possible to assign different priorities and options to the scanner based on the mapping variables. If a scanner has multiple mappings that match a file, then the first mapping wins.
Below is a sample configuration that shows how a single scanner can apply different options depending on the mapping.
scanners:
  'ScanX509':
    - positive:
        flavors:
          - 'x509_der_file'
      priority: 5
      options:
        type: 'der'
    - positive:
        flavors:
          - 'x509_pem_file'
      priority: 5
      options:
        type: 'pem'

Python Logging Configuration (strelka.py)
strelka.py uses an ini file (etc/strelka/pylogging.ini) to manage cluster-level statistics and information output by the Python logger. By default, this configuration file will log data to stdout and disable logging for packages imported by scanners.

DirStream Configuration (strelka_dirstream.py)
Strelka's dirstream configuration file is stored in etc/dirstream/dirstream.yml and contains two sub-sections: processes and workers.
The "processes" section controls the processes launched by the utility. The configuration options are:
  • "shutdown_timeout": amount of time (in seconds) that will elapse before the utility forcibly kills child processes after they have received a shutdown command (defaults to 10 seconds)
The "workers" section controls directory settings and network settings for each worker that sends files to the Strelka cluster. This section is a list; adding multiple directory/network settings makes it so multiple directories can be monitored at once. The configuration options are:
  • "directory": directory that files are sent from (defaults to None)
  • "source": application that writes files to the directory, used to control metadata parsing functionality (defaults to None)
  • "meta_separator": unique string used to separate pieces of metadata in a filename, used to parse metadata and send it along with the file to the cluster (defaults to "S^E^P")
  • "file_mtime_delta": delta (in seconds) that must pass since a file was last modified before it is sent to the cluster (defaults to 5 seconds)
  • "delete_files": boolean that determines if files should be deleted after they are sent to the cluster (defaults to False)
  • "broker": network address and network port of the broker (defaults to "127.0.0.1:5558")
  • "timeout": amount of time (in seconds) to wait for a file to be successfully sent to the broker (defaults to 10)
  • "use_green": boolean that determines if PyZMQ green should be used (this can increase performance at the risk of message loss, defaults to True)
  • "broker_public_key": location of the broker Curve public key certificate (enables Curve encryption, must be used if the broker has Curve enabled)
  • "client_secret_key": location of the client Curve secret key certificate (enables Curve encryption, must be used if the broker has Curve enabled)
To enable Bro support, a Bro file extraction script must be run by the Bro application; Strelka's file extraction script is stored in etc/bro/extract-strelka.bro and includes variables that can be redefined at Bro runtime. These variables are:
  • "mime_table": table of strings (Bro source) mapped to a set of strings (Bro mime_type) -- this variable defines which file MIME types Bro extracts and is configurable based on the location Bro identified the file (e.g. extract application/x-dosexec files from SMTP, but not SMB or FTP)
  • "filename_re": regex pattern that can extract files based on Bro filename
  • "unknown_mime_source": set of strings (Bro source) that determines if files of an unknown MIME type should be extracted based on the location Bro identified the file (e.g. extract unknown files from SMTP, but not SMB or FTP)
  • "meta_separator": string used in extracted filenames to separate embedded Bro metadata -- this must match the equivalent value in etc/dirstream/dirstream.yml
  • "directory_count_interval": interval used to schedule how often the script checks the file count in the extraction directory
  • "directory_count_threshold": int that is used as a trigger to temporarily disable file extraction if the file count in the extraction directory reaches the threshold

Encryption and Authentication
Strelka has built-in, optional encryption and authentication for client connections provided by CurveZMQ.

CurveZMQ
CurveZMQ (Curve) is ZMQ's encryption and authentication protocol; more details are available in the ZeroMQ documentation.

Using Curve
Strelka uses Curve to encrypt and authenticate connections between clients and brokers. By default, Strelka's Curve support is set up to enable encryption but not authentication.
To enable Curve encryption, the broker must be loaded with a private key -- any clients connecting to the broker must have the broker's public key to successfully connect.
To enable Curve encryption and authentication, the broker must be loaded with a private key and a directory of client public keys -- any clients connecting to the broker must have the broker's public key and have their client key loaded on the broker to successfully connect.
The generate_curve_certificates.py utility can be used to create client and broker certificates.
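For example, a hedged workflow for enabling encryption and authentication might look like the following; the key directory is a placeholder:
$ generate_curve_certificates.py --path /etc/strelka/keys/ --broker
$ generate_curve_certificates.py --path /etc/strelka/keys/ --client
The broker's secret key and the directory of client public keys are then referenced by "broker_secret_key" and "client_public_keys" in etc/strelka/strelka.yml, and clients pass the matching certificates with --broker-public-key and --client-secret-key.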

Clusters
The following are recommendations and considerations to keep in mind when deploying clusters.

General Recommendations
The following recommendations apply to all clusters:
  • Do not run workers on the same server as a broker
    • This puts the health of the entire cluster at risk if the server becomes over-utilized
  • Do not over-allocate workers to CPUs
    • 1 worker per CPU
  • Allocate at least 1GB RAM per worker
    • If workers do not have enough RAM, then there will be excessive memory errors
    • Big files (especially compressed files) require more RAM
    • In large clusters, diminishing returns begin above 4GB RAM per worker
  • Allocate as much RAM as reasonable to the broker
    • ZMQ messages are stored entirely in memory -- in large deployments with many clients, the broker may use a lot of RAM if the workers cannot keep up with the number of file tasks

Sizing Considerations
Multiple variables should be considered when determining the appropriate size for a cluster:
  • Number of file requests per second
  • Type of file requests
    • Remote file requests take longer to process than direct file requests
  • Diversity of files requested
    • Binary files take longer to scan than text files
  • Number of YARA rules deployed
    • Scanning a file with 50,000 rules takes longer than scanning a file with 50 rules
The best way to properly size a cluster is to start small, measure performance, and scale out as needed.

Docker Considerations
Below is a list of considerations to keep in mind when running a cluster with Docker containers (a hedged docker run sketch follows the list):
  • Share volumes, not files, with the container
    • Strelka's workers read configuration files and YARA rules files when they start up -- sharing volumes with the container ensures that updated copies of these files on the localhost are reflected accurately inside the container without needing to restart the container
  • Increase stop-timeout
    • By default, Docker will forcibly kill a container if it has not stopped after 10 seconds -- this value should be increased to greater than the shutdown_timeout value in etc/strelka/strelka.yml
  • Increase shm-size
    • By default, Docker limits a container's shm size to 64MB -- this can cause errors with Strelka scanners that utilize tempfile
  • Set logging options
    • By default, Docker does not limit the size of logs output by a container
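Below is a hedged docker run sketch that applies these considerations; the volume paths, size limits, and log options are placeholders and should be adjusted to your deployment.
$ docker run -d --stop-timeout 60 --shm-size 512m --log-opt max-size=100m --log-opt max-file=3 -v /opt/strelka/etc/:/opt/strelka/etc/ -v /var/log/strelka/:/var/log/strelka/ strelka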

Management
Due to its distributed design, we recommend using container orchestration (e.g. Kubernetes) or configuration management/provisioning (e.g. Ansible, SaltStack, etc.) systems for managing clusters.


