Restricting Users for Kibana with Filtered Aliases | Elastic Blog

# Restricting Users for Kibana with Filtered Aliases

Update: After a lengthy review of the Nginx configuration, we discovered that it’s not possible to lock down the aliases based on the remote user, due to limitations in Nginx location expressions. The tricky part is that we are unable to ensure that the remote user can only access their own alias. Ideally, we would want to do something like:

```
location ~ ^/((,?)${remote_user}-\d+\.\d+\.\d+)+/_search$ {
    proxy_pass http://127.0.0.1:9200;
}
```


But the limitation is that Nginx doesn’t interpolate variables in regular expressions, so there is no way to ensure that `$remote_user` matches the alias. Where does that leave us? At this time, we DO NOT recommend using the Nginx setup described below. The setup is still possible, but instead of using Nginx to proxy the requests to Elasticsearch, you will need to use a proxy that gives you the ability to write more expressive rules around sending requests to the backend (a proxy written in Node.js comes to mind). For context, here is the original article:

One question we often get with Kibana is, “How do you restrict the data for different users?” Our go-to answer has always been to proxy the requests through Nginx and use filtered aliases to segment the data. The typical response to this is, “Uh… okay, I will look into it.” This blog post takes that advice one step further and gives you a working example of exactly what’s needed to accomplish the task. For our example, we are going to use web server logs that segment the users based on the host name.
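The rule that Nginx cannot express is easy to state in code. Here is a minimal sketch, in Ruby, of the check a more expressive proxy would perform before forwarding a request to Elasticsearch. The method name and the assumption that requests look like `/alias1,alias2/_search` are ours for illustration; they are not part of any existing proxy:

```ruby
# Hypothetical check a custom proxy could run before forwarding a
# request: every index in the path must be the remote user's own
# dated alias (e.g. buzz-2014.02.03).
def allowed_request?(remote_user, path)
  match = path.match(%r{\A/([^/]+)/_search\z})
  return false unless match
  match[1].split(',').all? do |index|
    index.match?(/\A#{Regexp.escape(remote_user)}-\d{4}\.\d{2}\.\d{2}\z/)
  end
end
```

A request is forwarded only when every index in the path belongs to the authenticated user; a request mixing in another user’s alias is rejected.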
The incoming log will look something like this:

```json
{
  "@timestamp": "2014-02-04T11:46:16.164Z",
  "ip": "106.115.144.245",
  "extension": "css",
  "response": "200",
  "country": "IN",
  "tags": ["warning", "info"],
  "referrer": "http://twitter.com/success/pyotr-kolodin",
  "agent": "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
  "clientip": "106.115.144.245",
  "bytes": 6091.388051980175,
  "request": "/terry-hart.css",
  "host": "astronauts.com",
  "responseTime": 303,
  "message": "106.115.144.245 - - [2014-02-04T11:46:16.164Z] \"GET /terry-hart.css HTTP/1.1\" 200 6091.388051980175 \"-\" \"Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1\""
}
```

Assuming the log data is coming in via Logstash, we can set up the following translate filter to add a `user` field based on the host:

```
filter {
  translate {
    field       => "host"
    destination => "user"
    dictionary  => [
      "astronauts.com", "buzz",
      "nasa.org",       "gus",
      "space.com",      "shakey",
      "rocketmen.org",  "hotdog"
    ]
  }
}
```

With the `user` field added to the data, we can now set up our first filtered alias for a user using the Sense interface in Elasticsearch Marvel:

```
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "logstash-2014.02.03",
        "alias": "buzz-2014.02.03",
        "filter": { "term": { "user": "buzz" } }
      }
    }
  ]
}
```

Any request that goes to `/buzz-2014.02.03/_search` will now include a term filter on the `user` field for `buzz`. The one gotcha with this system is that an alias needs to be set up for every user for each daily Logstash index. Elasticsearch does not currently have a feature for setting up dynamic aliases upon index creation, but the good news is that it’s coming. For now, we will need to use a nightly cron job to set up our user aliases.
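Since one of these alias actions is needed per user per daily index, it helps to build the action body programmatically. A small Ruby sketch (the helper name is ours; it simply constructs the same structure shown above):

```ruby
# Builds the "add" alias action shown above for a given user and
# Logstash date stamp (e.g. "2014.02.03").
def alias_action(user, date)
  {
    add: {
      index:  "logstash-#{date}",
      alias:  "#{user}-#{date}",
      filter: { term: { user: user } }
    }
  }
end
```

With the `elasticsearch` Ruby gem, a batch of these can then be submitted in one `update_aliases` call.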
```ruby
require 'elasticsearch'
require 'hashie'

# Connect to the Elasticsearch cluster
client = Elasticsearch::Client.new

# Get all the users and map them to an array
resp = Hashie::Mash.new client.search index: "logstash-*", body: {
  size: 0,
  facets: { users: { terms: { field: 'user' } } }
}
users = resp.facets.users.terms.to_a.map { |f| f.term }

# Get a list of all the indexes and aliases
aliases = Hashie::Mash.new client.indices.get_aliases
aliases.each_pair do |index, index_aliases|
  # Match all the Logstash indexes and extract the Logstash
  # date stamp from the index name.
  matches = /logstash-(\d{4}\.\d{2}\.\d{2})/.match index
  if matches
    # Loop through each user and check whether the alias exists;
    # if it doesn't, create the new alias with a term filter.
    users.each do |user|
      alias_name = "#{user}-#{matches[1]}"
      unless index_aliases.aliases[alias_name]
        puts "Creating alias #{alias_name} for #{index}"
        client.indices.put_alias index: index, name: alias_name, body: {
          filter: { term: { user: user } }
        }
      end
    end
  end
end
```

The next piece of the puzzle is setting up Nginx to serve the Kibana interface with basic auth and to proxy the `logstash-*` requests to the user’s aliases. There is a sample Nginx configuration in the Kibana GitHub repo that we will use as a starting point. We need to add basic auth to the top of the configuration, along with modifying some of the rewrite rules to use the filtered aliases and user-specific indexes. You can view the modified file here.

The trickiest part of the setup is translating the `logstash-*` requests to the user’s aliases. Kibana will often send requests like `/logstash-2014.02.04,logstash-2014.02.03/_search`, which need to be translated to `/buzz-2014.02.04,buzz-2014.02.03/_search`. Nginx doesn’t have a simple find-and-replace feature, so we need to dust off our hacker skills and set up a recursive rewrite rule to make the translation for us.

```
# Recursively change Logstash prefixed index names to user prefixed aliases.
# This will process until the logstash-YYYY.MM.DD pattern disappears.
location ~ ^/([^*]*)logstash-(?<date>\d+\.\d+\.\d+)(,?[^*/]+)*/_search$ {
    set $part1 $1;
    set $part3 $3;
    rewrite ^.*$ /${part1}${remote_user}-${date}${part3}/_search last;
}

# All requests to kibana-int also need to be proxied to a unique index per user.
location ~ ^/kibana-int/(.*)$ {
    set $part1 $1;
    proxy_pass http://127.0.0.1:9200/kibana-int-${remote_user}/${part1};
}
```
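To see what the recursive rewrite is doing, here is a loose Ruby model of its intended effect. This is an illustration only, not part of the configuration, and it simplifies one detail: it captures the whole comma-separated tail in a single group, where the Nginx regex relies on repeated rewrites to converge:

```ruby
# Hypothetical model of the recursive rewrite: each pass replaces the
# last logstash-YYYY.MM.DD index in the path (the leading group is
# greedy) with the user's alias, until no logstash- prefix remains.
def translate_path(remote_user, path)
  pattern = %r{\A/([^*]*)logstash-(\d+\.\d+\.\d+)((?:,?[^*/]+)*)/_search\z}
  while (m = path.match(pattern))
    path = "/#{m[1]}#{remote_user}-#{m[2]}#{m[3]}/_search"
  end
  path
end
```

For example, `translate_path('buzz', '/logstash-2014.02.04,logstash-2014.02.03/_search')` yields `/buzz-2014.02.04,buzz-2014.02.03/_search` after two passes.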