Installing and configuring ELK Stack (ElasticSearch, Logstash, Kibana) for iptables

From Fyzix

Prerequisites

Elasticsearch and Logstash require Java, so install Oracle Java 8 from the WebUpd8 PPA first:

echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" > /etc/apt/sources.list.d/webupd8team-java.list
echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" >> /etc/apt/sources.list.d/webupd8team-java.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886
apt-get update
apt-get upgrade
apt-get -y install oracle-java8-installer

Elasticsearch

Installation

Run the following command to import the Elasticsearch public GPG key into apt:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Create the Elasticsearch source list:

echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
apt-get update
apt-get upgrade

Install Elasticsearch with this command:

apt-get -y install elasticsearch

Configuration

Elasticsearch is now installed. Let's edit the configuration:

vi /etc/elasticsearch/elasticsearch.yml

You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:

network.host: localhost
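If you prefer to apply this change non-interactively (for example from a provisioning script), a sed one-liner can do it. This is a sketch: it operates on a throwaway copy seeded with the stock commented-out line; on a real host, point sed at /etc/elasticsearch/elasticsearch.yml instead.

```shell
# Seed a copy with the stock commented-out setting (for illustration only;
# on a real host, run sed against /etc/elasticsearch/elasticsearch.yml).
printf '# network.host: 192.168.0.1\n' > /tmp/elasticsearch.yml
sed -i 's/^#[[:space:]]*network\.host:.*/network.host: localhost/' /tmp/elasticsearch.yml
grep '^network.host:' /tmp/elasticsearch.yml
# network.host: localhost
```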

Restart Elasticsearch

service elasticsearch restart

Then run the following commands to start Elasticsearch on boot (update-rc.d applies to SysVinit systems; the systemctl commands apply to systemd systems):

update-rc.d elasticsearch defaults 95 10
systemctl enable elasticsearch
systemctl start elasticsearch

Kibana

Installation

Before installing Kibana, let's set up a kibana user and group, which will own and run Kibana:

groupadd -g 999 kibana
useradd -u 999 -g 999 kibana

Download Kibana 4 to the source directory

Latest download can be found at: https://www.elastic.co/downloads/kibana

mkdir -p /source
cd /source
wget https://download.elastic.co/kibana/kibana/kibana-4.4.1-linux-x64.tar.gz
tar xvf kibana-4.4.1-linux-x64.tar.gz

Move Kibana to a more appropriate location

mkdir -p /opt/kibana
cd /source/kibana-4.4.1-linux-x64
mv * /opt/kibana

Fix permissions

chown -R kibana:kibana /opt/kibana

Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service. Download a Kibana init script with this command:

cd /etc/init.d && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-init
cd /etc/default && sudo curl -o kibana https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/fc5025c3fc499ad8262aff34ba7fde8c87ead7c0/kibana-4.x-default

Now enable the Kibana service, and start it:

chmod +x /etc/init.d/kibana
update-rc.d kibana defaults 96 9
service kibana start

Configuration

In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":

Modify /opt/kibana/config/kibana.yml

server.host: "localhost"

Nginx

Because we configured Kibana to listen on localhost, we must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

Note: If you already have an Nginx instance that you want to use, feel free to use that instead. Just make sure to configure Kibana so it is reachable by your Nginx server (you probably want to change the host value, in /opt/kibana/config/kibana.yml, to your Kibana server's private IP address or hostname). Also, it is recommended that you enable SSL/TLS.

Installation

apt-get install nginx apache2-utils

Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:

htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.
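If you need to create the entry non-interactively (say, from a provisioning script), an equivalent htpasswd line can be generated with openssl. This is a sketch: the username and password are placeholders, the output goes to a temporary file here, and on a real host you would append the entry to /etc/nginx/htpasswd.users.

```shell
# Generate an htpasswd-compatible entry using Apache's apr1 (MD5) scheme.
# Username and password are placeholders; replace them with your own values.
entry="kibanaadmin:$(openssl passwd -apr1 'YourPasswordHere')"
echo "$entry" > /tmp/htpasswd.users
grep -c '^kibanaadmin:\$apr1\$' /tmp/htpasswd.users
# 1
```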

Configuration

Now open the Nginx default server block in your favorite editor. We will use vi:

vi /etc/nginx/sites-available/default

Delete the file's contents, and paste the following code block into the file. Be sure to update the server_name to match your server's name:

server {
    listen 80;

    server_name cervin.home;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;        
    }
}

Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the htpasswd.users file we created earlier to require basic authentication.

Restart Nginx

service nginx restart

Logstash

Installation

The Logstash package is available from the same repository as Elasticsearch, and we already installed that public key, so let's create the Logstash source list:

echo 'deb http://packages.elasticsearch.org/logstash/2.1/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
apt-get update
apt-get upgrade

Install Logstash with apt-get

apt-get install logstash

Configuration (IPTables specific)

Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs. Additionally, we will set up a reference to the grok patterns template in /etc/logstash/grok/ and download the GeoIP database.

01-input.conf

This defines the input log files to be ingested into Logstash, with Elasticsearch storing the data on the backend.

The file paths are based on the iptables log files we split up earlier using an rsyslog.d configuration.

/etc/logstash/conf.d/01-input.conf

input {
    file {
            type => "denied"
            path => "/var/log/firewall/denied.log"
    }
    file {
            type => "inbound"
            path => "/var/log/firewall/inbound.log"
    }
    file {
            type => "outbound"
            path => "/var/log/firewall/outbound.log"
    }
    file {
            type => "blockedhosts"
            path => "/var/log/firewall/blockedhosts.log"
    }
    file {
            type => "watched"
            path => "/var/log/firewall/watched.log"
    }
    file {
            type => "accepted"
            path => "/var/log/firewall/accepted.log"
    }
    file {
            type => "unauthssh"
            path => "/var/log/firewall/unauthssh.log"
    }
    file {
            type => "tripport"
            path => "/var/log/firewall/tripport.log"
    }
    file {
            type => "scan"
            path => "/var/log/firewall/scan.log"
    }
    file {
            type => "prerouting"
            path => "/var/log/firewall/prerouting.log"
    }
    file {
            type => "postrouting"
            path => "/var/log/firewall/postrouting.log"
    }
}
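The per-file split above assumes rsyslog rules that route each iptables log prefix to its own file, roughly like the following sketch. The filename /etc/rsyslog.d/10-iptables.conf is an assumption, and the prefixes must match whatever your iptables LOG rules actually emit:

```
# /etc/rsyslog.d/10-iptables.conf (sketch)
:msg, contains, "DENIED: "    /var/log/firewall/denied.log
& stop
:msg, contains, "INBOUND: "   /var/log/firewall/inbound.log
& stop
:msg, contains, "OUTBOUND: "  /var/log/firewall/outbound.log
& stop
# ...one rule per prefix used in 01-input.conf...
```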

10-filter.conf

/etc/logstash/conf.d/10-filter.conf

filter {

    if [type] == "denied" {
        grok {
            break_on_match => true
            match => { "message" => "DENIED: " }
            add_tag => [ "iptables", "iptables-denied", "iptables-source-geo" ]
        }
    }
    if [type] == "inbound" {
        grok {
            break_on_match => true
            match => { "message" => "INBOUND: " }
            add_tag => [ "iptables", "iptables-inbound", "iptables-source-geo" ]
        }
    }
    if [type] == "outbound" {
        grok {
            break_on_match => true
            match => { "message" => "OUTBOUND: " }
            add_tag => [ "iptables", "iptables-outbound", "iptables-source-geo" ]
        }
    }
    if [type] == "blockedhosts" {
        grok {
            break_on_match => true
            match => { "message" => "BLOCKEDHOSTS: " }
            add_tag => [ "iptables", "iptables-blockedhosts", "iptables-source-geo" ]
        }
    }
    if [type] == "watched" {
        grok {
            break_on_match => true
            match => { "message" => "WATCHED: " }
            add_tag => [ "iptables", "iptables-watched", "iptables-destination-geo" ]
        }
    }
    if [type] == "accepted" {
        grok {
            break_on_match => true
            match => { "message" => "ACCEPTED: " }
            add_tag => [ "iptables", "iptables-accepted", "iptables-source-geo" ]
        }
    }
    if [type] == "unauthssh" {
        grok {
            break_on_match => true
            match => { "message" => "UNAUTH SSH: " }
            add_tag => [ "iptables", "iptables-unauthssh", "iptables-source-geo" ]
        }
    }
    if [type] == "tripport" {
        grok {
            break_on_match => true
            match => { "message" => "TRIPPORT: " }
            add_tag => [ "iptables", "iptables-tripport", "iptables-source-geo" ]
        }
    }
    if [type] == "scan" {
        grok {
            break_on_match => true
            match => { "message" => "SCAN: " }
            add_tag => [ "iptables", "iptables-scan", "iptables-source-geo" ]
        }
    }
    if [type] == "prerouting" {
        grok {
            break_on_match => true
            match => { "message" => "PREROUTING: " }
            add_tag => [ "iptables", "iptables-prerouting", "iptables-source-geo" ]
        }
    }
    if [type] == "postrouting" {
        grok {
            break_on_match => true
            match => { "message" => "POSTROUTING: " }
            add_tag => [ "iptables", "iptables-postrouting", "iptables-source-geo" ]
        }
    }
    if ("iptables" in [tags]) {
        grok {
            named_captures_only => true
            patterns_dir => "/etc/logstash/grok/iptables.pattern"
            match => { "message" => "%{IPTABLES}" }
        }
    }
    if ("iptables-source-geo" in [tags]) {
        geoip {
            source => "source_ip"
            database => "/etc/logstash/GeoLiteCity.dat"
        }
    }
    if ("iptables-destination-geo" in [tags]) {
        geoip {
            source => "destination_ip"
            database => "/etc/logstash/GeoLiteCity.dat"
        }
    }
    date {
        # Use the timestamp field to match event time and
        # populate the @timestamp field (used by Elasticsearch).
        #match => [ "timestamp", "MMM dd HH:mm:ss","MMM  dd HH:mm:ss"]
        match => [ "timestamp", "MMM dd YYY HH:mm:ss", "MMM  d YYY HH:mm:ss", "MMM  dd HH:mm:ss", "ISO8601" ]
        timezone => "US/Mountain"
    }
}

iptables.pattern

The above filters reference grok patterns, which can be a real pain to set up. If your log messages do not match the patterns below, you will need to use the Grok Debugger to work out the proper patterns. This can be tedious: take a broken log message flagged with _grokparsefailure and feed chunks of it into the debugger until you sort out a matching pattern.
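Before reaching for the Grok Debugger, you can sanity-check individual fields with plain grep against a captured log line. This sketch uses a fabricated sample message and only approximates the %{IPV4:source_ip} capture from the patterns below:

```shell
# A fabricated iptables log line for illustration.
line='Jan 13 09:15:01 cervin kernel: [123.4] DENIED: IN=eth0 OUT= MAC=aa:bb:cc:dd:ee:ff:11:22:33:44:55:66:08:00 SRC=203.0.113.7 DST=192.0.2.10 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=1234 DF PROTO=TCP SPT=51515 DPT=22 WINDOW=29200 RES=0x00 SYN URGP=0'

# Extract the SRC= field, mimicking the %{IPV4:source_ip} capture.
echo "$line" | grep -oE 'SRC=[0-9.]+' | cut -d= -f2
# 203.0.113.7
```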

However, the below patterns are a good reference.

mkdir -p /etc/logstash/grok

/etc/logstash/grok/iptables.pattern

# IPTABLES Pattern
IPTABLES1 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*WINDOW=(%{INT:window})?.*RES=(%{WORD:received_bits}) %{WORD:packet_type}?.*URGP=(%{INT:urgp})
IPTABLES2 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}) %{WORD:dont_fragment}.*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*WINDOW=(%{INT:window})?.*RES=(%{WORD:received_bits}) %{WORD:packet_type}?.*URGP=(%{INT:urgp})
IPTABLES3 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*LEN=(%{INT:header_length})
IPTABLES4 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}) %{WORD:dont_fragment}.*PROTO=(%{WORD:protocol})
IPTABLES5 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}) %{WORD:dont_fragment}.*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*WINDOW=(%{INT:window})?.*RES=(%{WORD:received_bits}) %{WORD:packet_type}?.*URGP=(%{INT:urgp})
IPTABLES6 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*LEN=(%{INT:header_length})
IPTABLES7 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*TYPE=(%{INT:type})?.*CODE=(%{INT:code})?.*ID=(%{INT:id})?.*SEQ=(%{INT:sequence})
IPTABLES8 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}) %{WORD:dont_fragment}.*PROTO=(%{WORD:protocol})?.*TYPE=(%{INT:type})?.*CODE=(%{INT:code})?.*ID=(%{INT:id})?.*SEQ=(%{INT:sequence})
IPTABLES9 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*LEN=(%{INT:header_length})
IPTABLES10 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*TYPE=(%{INT:type})?.*CODE=(%{INT:code})?.*ID=(%{INT:id})?.*SEQ=(%{INT:sequence})
IPTABLES11 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})?.*SPT=(%{INT:source_port})?.*DPT=(%{INT:destination_port})?.*WINDOW=(%{INT:window})?.*RES=(%{WORD:received_bits}) %{WORD:packet_type}?.*URGP=(%{INT:urgp})
IPTABLES12 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*PHYSIN=(%{USERNAME:physical_in_interface})?.*PHYSOUT=(%{USERNAME:physical_out_interface})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id})?.*PROTO=(%{WORD:protocol})
IPTABLES13 %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:firewall_hostname} kernel: %{GREEDYDATA:useless_timestamp} %{WORD:action}:?.*IN=(%{USERNAME:in_interface})?.*OUT=(%{USERNAME:out_interface})?.*PHYSIN=(%{USERNAME:physical_in_interface})?.*MAC=(%{COMMONMAC:destination_mac}):(%{COMMONMAC:source_mac})?.*SRC=(%{IPV4:source_ip}).*DST=(%{IPV4:destination_ip})?.*LEN=(%{INT:header_length})?.*TOS=(%{WORD:precedence})?.*TTL=(%{WORD:ttl})?.*ID=(%{INT:id}).*PROTO=(%{WORD:protocol})
IPTABLES (?:%{IPTABLES1}|%{IPTABLES2}|%{IPTABLES3}|%{IPTABLES4}|%{IPTABLES5}|%{IPTABLES6}|%{IPTABLES7}|%{IPTABLES8}|%{IPTABLES9}|%{IPTABLES10}|%{IPTABLES11}|%{IPTABLES12}|%{IPTABLES13})

20-output.conf

This defines the output from Logstash to Elasticsearch, running on localhost port 9200.

/etc/logstash/conf.d/20-output.conf

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

Test your Logstash configuration with this command:

service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, read the error output to see what's wrong with your Logstash configuration.

Restart Logstash and enable it on boot to put our configuration changes into effect (update-rc.d applies to SysVinit systems; the systemctl commands apply to systemd systems):

service logstash restart
update-rc.d logstash defaults 96 9
systemctl enable logstash
systemctl start logstash

Use the following command to verify that Logstash has created an index in Elasticsearch.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_create_an_index.html

curl 'localhost:9200/_cat/indices?v'

Expected output:

health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana               1   1          2            0      5.8kb          5.8kb
yellow open   logstash-2016.01.13   5   1     768960            0     53.6mb         53.6mb

The logstash-2016.01.13 index is important: if it doesn't exist, Kibana will not be able to get past the initial index pattern configuration page. Be sure to give Logstash and Elasticsearch time to ingest data (typically no more than three minutes).
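If you want to script this readiness check, you can filter the _cat/indices output for a logstash-* index. To keep the sketch self-contained, the parsing runs against a captured sample here; on a live host, replace the here-document with curl -s 'localhost:9200/_cat/indices?v':

```shell
# Sample _cat/indices output (on a live host, fetch this with curl).
indices=$(cat <<'EOF'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana               1   1          2            0      5.8kb          5.8kb
yellow open   logstash-2016.01.13   5   1     768960            0     53.6mb         53.6mb
EOF
)

# Succeed only if a logstash-* index exists (column 3 is the index name).
echo "$indices" | awk '$3 ~ /^logstash-/ { found=1 } END { exit !found }' \
  && echo "logstash index present"
# logstash index present
```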

Download Geo-IP database

This database helps translate IP addresses into longitude and latitude coordinates.

cd /etc/logstash
curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
gunzip GeoLiteCity.dat.gz

This will extract the GeoLite City database to /etc/logstash/GeoLiteCity.dat, which we specified in the above Logstash 10-filter.conf configuration.

Fix logrotate

Create or modify the following logrotate rules so the Logstash logs don't fill the disk:

/etc/logrotate.d/logstash

/var/log/logstash/*.log
/var/log/logstash/*.err
/var/log/logstash/*.stdout
{
        rotate 7
        size 10M
        missingok
        copytruncate
        notifempty
        compress
        sharedscripts
        postrotate
        invoke-rc.d logstash restart > /dev/null
        endscript
}

Post Configuration

Configure Index Pattern

Browse to the host address.

Log in with the kibanaadmin user and the password you set earlier.

As mentioned above, it's important that data has been ingested before you create the index pattern.

(Screenshot: 1-select-index.gif, the index pattern selection page)

Go ahead and select @timestamp from the dropdown menu, then click the Create button to create the first index pattern.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:

(Screenshot: 2-discover.png, the Kibana Discover page)

Here, you can search and browse through your logs. You can also customize your dashboard.

Change the time frame by selecting an area on the histogram or using the menu above it. Click on messages below the histogram to see how the data is being filtered. Kibana has many other features, such as graphing and filtering, so feel free to poke around!

Delete Elasticsearch Index (if you need to nuke and start over)

Use the following curl command to identify the index name:

curl 'localhost:9200/_cat/indices?v'

Delete using the following:

curl -XDELETE 'http://localhost:9200/logstash-2016.01.*/'

Substitute logstash-2016.01.* with the name or pattern of the specific index you want to nuke.
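To delete several daily indices at once, you can generate the DELETE calls from the indices listing. This sketch is a dry run against sample index names: it echoes the curl commands instead of executing them. On a live host, fetch the names (for example with curl -s 'localhost:9200/_cat/indices?h=index'), filter for the ones you want gone, and remove the echo.

```shell
# Sample index names; on a live host, pull these from the _cat/indices API.
indices='logstash-2016.01.12
logstash-2016.01.13'

for idx in $indices; do
  # Dry run: print each command instead of executing it.
  echo curl -XDELETE "http://localhost:9200/${idx}/"
done
# curl -XDELETE http://localhost:9200/logstash-2016.01.12/
# curl -XDELETE http://localhost:9200/logstash-2016.01.13/
```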