Thursday, March 26, 2020

Configuring DNS - Web Servers with Subdomains

So I first purchased the domains phub.info and slabj.com for testing purposes. I changed the DNS settings to direct them to servers I created on DigitalOcean, connecting phub.info and slabj.com to LAMP stacks with WordPress. The setup includes three main parts. The first is configuring GoDaddy, where I purchased the domains, to use custom nameservers: ns1.digitalocean.com, ns2.digitalocean.com and ns3.digitalocean.com.

Then I set the A record with hostname @ to point to the server I want to use, and set the CNAME record for www to match the hostname of each site respectively, so that people can reach my website by typing the site name with or without a www in front. The sites pointed over fairly quickly, but this process can take 24 to 48 hours to take effect. I am going to add a subdomain to slabj.com so that if you go to promotions.slabj.com you can see special promotional offers for the month.

To add a subdomain within my Apache server configuration I go to /var/www, where my html folder and wordpress folder are, and add a promotions folder. Within that folder I create a sample index.php file to test out the subdomain.
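A minimal sketch of those steps (the greeting text inside index.php is my placeholder, not the content from the post):

sudo mkdir /var/www/promotions
echo "<?php echo 'Promotions subdomain is live!'; ?>" | sudo tee /var/www/promotions/index.php
sudo chown -R www-data:www-data /var/www/promotions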



Then within the DigitalOcean DNS configuration I add promotions as a new CNAME record for the subdomain. Next I configured the Apache sites-available file: I made a copy of the default configuration and added the specific routing needed for the subdomain, as shown below. Then I enable the new test.com.conf file with 'sudo a2ensite test.com.conf'.
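The copied configuration would look roughly like this (a sketch; the ServerName and DocumentRoot follow the setup described above):

<VirtualHost *:80>
    ServerName promotions.slabj.com
    DocumentRoot /var/www/promotions

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>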


I now disable the original default configuration with 'sudo a2dissite 000-default.conf' and reload Apache with 'sudo systemctl reload apache2' so the changes take effect. Now promotions.slabj.com brings up the content I specifically put in the Apache folder for promotions, and I can add special offers on that subdomain that are not on the main site.


Wednesday, March 25, 2020

Create a LAMP (Linux, Apache, MySQL & PHP) stack with WordPress

In this example I create a LAMP stack and add WordPress, along with Node.js for additional customization later on. The LAMP stack includes Linux, Apache, MySQL and PHP. Once that is set up I load WordPress, which uses PHP to run scripts and interacts with the MySQL database to store and retrieve data. That is why this stack is so powerful: we have four major elements, each of which is best at its own assigned task.

I am creating this in the cloud on a droplet using DigitalOcean. From my fresh server I started with updating Ubuntu and then connected to the command line interface through SSH on my work machine. From the command line I installed Apache and configured the firewall to initially accept HTTP and HTTPS traffic. Later I will add an SSL certificate and make sure all HTTP requests are upgraded to HTTPS for security.
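The initial commands were roughly the following (a sketch; "Apache Full" is the ufw profile Ubuntu registers for ports 80 and 443):

sudo apt update && sudo apt upgrade -y
sudo apt install apache2
sudo ufw allow "Apache Full"
sudo ufw enable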

Apache2 on Ubuntu up and running
Here I have now confirmed that my Apache server is up and running on my virtual machine loaded with Ubuntu in the cloud. Next I will add the MySQL server. The main steps are to create a 'wordpress' database and a 'wordpress' user, assign a password to that user, and grant appropriate read/write access. Initially root uses 'auth_socket' to authenticate, but I add a strong password before connecting everything.
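In the mysql shell those steps amount to something like this (the database name 'wordpress' is from the description above; the user name and password are placeholders):

mysql> CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
mysql> CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'a-strong-password';
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost';
mysql> FLUSH PRIVILEGES;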

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
+------------------+-------------------------------------------+-----------------------+-----------+
| user             | authentication_string                     | plugin                | host      |
+------------------+-------------------------------------------+-----------------------+-----------+
| root             |                                           | auth_socket           | localhost |
| mysql.session    | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| mysql.sys        | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost |
| debian-sys-maint | *BF4801BAF768EFD5C026C28EF9EAC1F56A063511 | mysql_native_password | localhost |
+------------------+-------------------------------------------+-----------------------+-----------+
Here are the initial authentication methods. Note that root begins without a password and uses auth_socket.


At this point I have three of the four pieces of the LAMP stack set up. I now need to include PHP and configure the server files accordingly. In the dir.conf file for mod_dir I make sure index.php is at the front of the DirectoryIndex directive so that the server brings up index.php before index.html. I then restart the Apache server and check its status before continuing, to make sure the configurations are correct.
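A sketch of the standard Ubuntu steps, installing PHP first:

sudo apt install php libapache2-mod-php php-mysql

And then in /etc/apache2/mods-enabled/dir.conf, index.php moves to the front of the list (after saving, Apache is restarted as shown below):

<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>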

root@ubuntu-lamp-1:~# sudo systemctl restart apache2
root@ubuntu-lamp-1:~# sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
   Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/apache2.service.d
           └─apache2-systemd.conf
   Active: active (running) since Tue 2020-03-24 18:18:40 UTC; 6s ago
  Process: 21869 ExecStop=/usr/sbin/apachectl stop (code=exited, status=0/SUCCESS)
  Process: 21874 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
 Main PID: 21891 (apache2)
    Tasks: 6 (limit: 1152)
   CGroup: /system.slice/apache2.service
           ├─21891 /usr/sbin/apache2 -k start
           ├─21894 /usr/sbin/apache2 -k start
           ├─21895 /usr/sbin/apache2 -k start
           ├─21896 /usr/sbin/apache2 -k start
           ├─21897 /usr/sbin/apache2 -k start
           └─21901 /usr/sbin/apache2 -k start

I now did a quick check on my PHP CLI:

root@ubuntu-lamp-1:~# apt show php-cli
Package: php-cli
Version: 1:7.2+60ubuntu1
Priority: optional
Section: php
Source: php-defaults (60ubuntu1)
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian PHP Maintainers <pkg-php-maint@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 12.3 kB
Depends: php7.2-cli
Supported: 5y
Download-Size: 3160 B
APT-Sources: http://mirrors.digitalocean.com/ubuntu bionic/main amd64 Packages
Description: command-line interpreter for the PHP scripting language (default)
 This package provides the /usr/bin/php command interpreter, useful for
 testing PHP scripts from a shell or performing general shell scripting tasks.
 .
 PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used
 open source general-purpose scripting language that is especially suited
 for web development and can be embedded into HTML.
 .
 This package is a dependency package, which depends on Ubuntu's default
 PHP version (currently 7.2).


I then created an info.php file and put it within the directory serving my webpages to see the PHP information for my server and to verify that PHP scripts are executing correctly.
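The info.php file is just a call to phpinfo(), placed in the web root (assumed here to be /var/www/html):

<?php
// Dumps the full PHP configuration page for this server.
phpinfo();
?>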

PHP info page
Now I can see that PHP is running and that completes the LAMP stack initial setup. I now have a stack on my server that includes Linux, Apache, MySQL and PHP.

Additionally here I added WordPress. And for site security I added a self-signed SSL certificate for encrypting traffic. In one line I create a certificate:


sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout /etc/ssl/private/apache-selfsigned.key \
-out /etc/ssl/certs/apache-selfsigned.crt

The command above creates an X.509 certificate that is valid for 365 days with a 2048-bit RSA key. I used recommended settings from Cipherli.st by Remy van Elst, and the ssl-params.conf file is the following:
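A sketch of that file, along the lines of the Cipherli.st recommendations at the time (the exact directives I used may differ):

# Requires mod_headers: sudo a2enmod headers
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM
SSLProtocol -all +TLSv1.2
SSLHonorCipherOrder On
SSLCompression off
SSLSessionTickets off

Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff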


Next I configured the virtual host to redirect all HTTP traffic to HTTPS. Within Apache I did this by adding the redirect to the default .conf file I created. With a few additional commands I then enabled my SSL virtual host, so my website now only accepts secure connections from clients.
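The redirect block in the port-80 virtual host looks roughly like this (the domain here is a placeholder, not the site's actual hostname):

<VirtualHost *:80>
    ServerName example.com
    # Send every plain-HTTP request to the HTTPS version of the site.
    Redirect permanent / https://example.com/
</VirtualHost>

Enabling it is then a matter of 'sudo a2enmod ssl', enabling the SSL virtual host with a2ensite, and restarting Apache.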

Now for WordPress I went ahead and downloaded and extracted the tar files, then added the site files to the directory where I am serving content from. Now when I load my site I am greeted by the WordPress setup page. And in just a few minutes I have WordPress installed and am ready to create content for my WordPress site on my LAMP stack.
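The download and extraction steps are roughly the following (this is WordPress's standard latest-release tarball; the web root of /var/www/html is assumed):

cd /tmp
curl -O https://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
sudo cp -a /tmp/wordpress/. /var/www/html
sudo chown -R www-data:www-data /var/www/html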



And finally to test that everything is working correctly I created a sample post and it works!


Great! The LAMP stack with WordPress works!

Thursday, March 19, 2020

Using GraphQL Clients

This builds upon the previous setup and creation of an Express GraphQL cloud server. The server has a GraphQL HTTP API endpoint mounted with Express. The Ubuntu virtual machine is in the cloud and there is an initial data set to test with.

I send my initial request via curl as follows:

curl -X POST \
-H "Content-Type: application/json" \
-d '{"query": "{ hello }"}' \
http://138.68.62.109:4000/graphql/

I now get the following response:

{"data":{"hello":"Welcome to your new Express GraphQL Server Jason!"}}%

There are a few different ways in which the data can be accessed. Above I used curl. In the previous article I showed how the GraphiQL interface can be used, and here I am showing how the endpoint can even be accessed from a regular browser. I just navigate to the HTTP API endpoint I have created and enter the following:
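Using the same endpoint as the curl request above, the browser URL is:

http://138.68.62.109:4000/graphql?query={hello}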



And from this query above the following data below is returned:
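It is the same JSON document the curl request returned:

{"data":{"hello":"Welcome to your new Express GraphQL Server Jason!"}}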


So now everything is working with hardcoded values, but next I will show how to use variables in client code to construct more complex and dynamic requests. For now I can just use plain HTTP requests as the transport layer for my queries, and invest the time in setting up a full GraphQL client later as the data becomes increasingly complex. For simple queries this is the best and fastest way to get up and running.

APIs (How to run an Express GraphQL Server)

This example builds off of the other initial GraphQL setup and initialization so I will begin from the end of that article. I already have an Ubuntu virtual machine in the cloud with Node.js & GraphQL. The cloud server initial setup is done at this point. So now, I am going to continue configuring and modifying a few things to turn this virtual machine into an Express GraphQL Server.

I installed express from the command line with: 
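Most likely the standard trio of packages (express-graphql and graphql are what the server code relies on alongside express):

npm install express express-graphql graphql --save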


Next I modified the original server.js file I had created. Here I now have a new module, 'express'. This lets me run a web server and mount a GraphQL API server on the HTTP endpoint I have assigned, "/graphql", on my Ubuntu cloud virtual machine.
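A sketch of what that server.js becomes (the greeting string matches the response shown in the follow-up post; express-graphql versions from this era export the middleware directly as the module):

const express = require('express');
const graphqlHTTP = require('express-graphql'); // default export in 2020-era versions
const { buildSchema } = require('graphql');

// Schema: a single query field that returns a String.
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// Resolver providing the value for { hello }.
const root = {
  hello: () => 'Welcome to your new Express GraphQL Server Jason!',
};

const app = express();

// Mount the GraphQL API on /graphql and enable the GraphiQL interface.
app.use('/graphql', graphqlHTTP({
  schema: schema,
  rootValue: root,
  graphiql: true,
}));

app.listen(4000);
console.log('Express GraphQL Server running at http://localhost:4000/graphql');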



And now things are beginning to get more interesting. I now have a visual interface with which I can send queries to the API HTTP endpoint. So here when I send a query for '{ hello }', the value returned by my resolver function comes back as a JSON response whose shape mirrors the query. One additional note is that this data lives at an HTTP API endpoint, so it can also just be called directly from the URL as:

http://'theSpecificIP':'thePortNumber'/graphql?query={hello}.

Using the GraphiQL interface to issue queries.

Setup & Initialization for GraphQL using Node.js

The title here is getting started with GraphQL, so this is just a primer for using GraphQL with Node.js in the cloud. I first created an Ubuntu cloud server and installed Node.js and GraphQL. Then I was ready to load up a server file and fire up Node.

For this introductory example I am going with a classic "Hello World!" program which just shows how to set up and initialize a basic API request. GraphQL does get more interesting, and I will show more complex queries in later posts. The goal here is just to show how to get up and running with querying APIs using GraphQL.

First I created a new directory using the CLI (command line interface) that I am connecting to via SSH from the terminal on my home work machine. Once I went through all the server setup and installations it was time to test out the system. For this I created a server.js file and added the contents seen below. Here a schema is defined with a query that returns the string value stored under 'hello' in var root. Then I pass the response and log it to the console in the terminal.
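A sketch of that server.js (the response string matches the output shown below; graphql-js versions from this era accept the schema, query and root as positional arguments):

const { graphql, buildSchema } = require('graphql');

// Define the schema: one query field that returns a String.
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// var root provides a resolver function for each API endpoint.
var root = {
  hello: () => 'Hello World! from GraphQL using Node.js',
};

// Run the query '{ hello }' against the schema and log the response.
graphql(schema, '{ hello }', root).then((response) => {
  console.log(response);
});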


To clarify a bit further, the code above is doing a few things. A schema is built to define the type of query the API endpoint accepts. The resolver then returns a string, which GraphQL wraps in the response object. This is a very simple starting point for a server, but it shows how to get up and running with GraphQL.

And the initial response from the first GraphQL query below confirms that string of data was successfully returned from my query.

Hello World! from GraphQL using Node.js



Tuesday, March 17, 2020

Simple XMLHttpRequest (AJAX)

Here I am illustrating how to do a simple XMLHttpRequest to make an asynchronous call for a file. I first created an index.html page where I construct the request with "xhr = new XMLHttpRequest();". Then in a separate simple text file I put the text "AJAX - XMLHttpRequest!". Next I defined an if else statement for what to do in case of an error (404) or success (200) status. Below I am saying that if there is a 200 success status and my document is found, then create a window pop-up alert with the contents of the file. This could be used to, say, pop up a verification code or anything else I want to load asynchronously relative to the rest of the page.

This creates an alert box that displays the contents of the dom.txt file.
Next I create an error message to be displayed in the console if the file is not found. And finally I finish defining the xhr options: here I am getting a file called dom.txt and I want it to load asynchronously.
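Put together, the script looks roughly like this (a sketch of the handlers described above):

var xhr = new XMLHttpRequest();

xhr.onload = function () {
  if (xhr.status === 200) {
    // Success: pop up the contents of the file.
    alert(xhr.responseText);
  } else if (xhr.status === 404) {
    // Error: log that the file was not found.
    console.error('dom.txt not found (404)');
  }
};

// GET dom.txt; 'true' makes the request asynchronous.
xhr.open('GET', 'dom.txt', true);
xhr.send();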


And now when I load the page and the JavaScript runs, it makes the AJAX (Asynchronous JavaScript and XML) call and returns the contents of my dom.txt file in the alert box popped up on screen. And that's it. That is an AJAX call in its simplest form.

AJAX call for a text file's contents to put in an alert box.





Creating Website Cookies For Return Customers

Session cookies are small pieces of data that a website sends out to users' web browsers. They can be used to store stateful information or to record the user's browsing activity. Sites commonly use them for security, making a user's account information visible only while they are logged in; I can make the cookie, and the access it grants, expire upon logging out. Alternatively, I can use the browsing activity records to serve targeted advertisements or offer targeted discounts to customers.

For this simple cookie, which I am calling "site_cookie_1", I want to track returning customers. The cookie also has a built-in expiry of one day, upon which it will in essence self-destruct.
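A sketch of the setcookie call (the name and one-day expiry are from the description; the value "return customer" is stored URL-encoded, which is why it later appears as return+customer):

<?php
// time() + 86400 makes the cookie expire one day from now;
// '/' makes it valid across the whole site.
setcookie("site_cookie_1", "return customer", time() + 86400, "/");
?>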


Next I wrote PHP to reflect to the screen whether the user has a cookie or not. Once the cookie is set there is a visual confirmation, with the cookie name echoed back to the user. This could also be written to a text file for logging purposes.
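Roughly like this (a sketch of the check described above):

<?php
$cookie_name = "site_cookie_1";

if (!isset($_COOKIE[$cookie_name])) {
    // First visit: warn that no cookie is set yet.
    echo "Cookie named '" . $cookie_name . "' is not set!";
} else {
    // Returning visitor: echo the cookie name and value back.
    echo "Cookie '" . $cookie_name . "' is set with value: " . $_COOKIE[$cookie_name];
}
?>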


So now when I first visit the cookies.php page I am greeted with a warning that I have not yet had any cookies set on my account.


Now I reload the page and the cookie script runs. Now a cookie is set and the value is reflected to the screen.

And here, reviewing the application details in the browser, I can see that I indeed have a site cookie named site_cookie_1 with the value of return+customer (the URL-encoded form of "return customer") coming from the server, which in this case is localhost, and it has an expiry date of one day from now.

Return customer cookie is set.

Automated login with PHP and cURL


Using PHP and cURL to log in to an account automatically can be quite useful. Sometimes I may need to access a user's information from another website to populate a form, facilitate a user sign-up, or initiate some other feature. In this case, when the user provides credentials I can use cURL with PHP to do an automated login and retrieve the user's information.

In the simple login form I have created below to test out this script, there is a username and password field at login1.php. Upon submitting the form, the process.php file handles querying the database and confirms whether or not a user is authenticated. Since here I am automating the process, I can bypass the manual login1.php form and just send the $data array to the process handler file, process.php.




In this first screenshot below you can see the first page a visitor will arrive at. They will be first asked for a username and password.


In this process, since I am skipping login1.php and going straight to process.php, I wanted to show what happens if you just try to access the process.php file directly. You would see the error below, which says there is an undefined index for the username and the password, meaning the script has no credentials to work with.




Here is the interesting part, where I can use PHP with cURL to automate the process. The $data array is sent as a POST to the process.php file. Now when it runs with the credentials, it logs in and reflects back that testuser1 was able to successfully log in with their username and password, all in an automated fashion.
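A sketch of that script (the username testuser1 is from the result above; the password, URL and cookie-jar path are placeholders):

<?php
// Credentials for the test account; the real password is not shown.
$data = array(
    'username' => 'testuser1',
    'password' => 'placeholder-password'
);

// POST the $data array straight to the process handler,
// bypassing the manual login1.php form.
$ch = curl_init('http://localhost/process.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Keep any session cookies so the login persists across requests.
curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/cookies.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/cookies.txt');

$response = curl_exec($ch);
curl_close($ch);

echo $response; // reflects back that testuser1 logged in
?>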

Logged in with PHP and cURL






Saturday, March 14, 2020

Configuring an Nginx Server as a Reverse Proxy for an Apache Server

For this example I created and configured an Ubuntu virtual machine in the cloud. Next I installed the Nginx and Apache servers on the machine. Finally I configured Nginx as a reverse proxy for Apache. The point is that different servers are more efficient at delivering specific types of content: Nginx is better at serving static content, while Apache is better suited to serving the dynamic backend content that comes out of SQL databases.

The process has quite a few steps. Once I had my Ubuntu server in the cloud, I proceeded to update all the software and install Nginx. A quick curl -I localhost shows that Nginx is now serving on localhost. So at this point I was able to proceed to the rest of the configuration.
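The header check looks something like this (the Server version string is taken from later in this post):

curl -I localhost

HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)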


Nginx is now up and running as my localhost.

Next I installed an Apache server on the virtual machine, which will be serving the backend content from a database. Since Nginx is running on port 80 and Apache wanted to start on port 80 as well, I went into the Apache ports configuration file and changed the listening port to 8080, and specifically for ssl_module and mod_gnutls.c I set the listening port to 8443.

Port configuration file for Apache.
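With those changes, /etc/apache2/ports.conf would look roughly like this (a sketch; only the port numbers come from the description above):

Listen 8080

<IfModule ssl_module>
    Listen 8443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 8443
</IfModule>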


After restarting the Apache server and getting its status, I can see the following message that lets me know a few things about the Apache server, including its status, ID and tasks. Here I am most concerned that the system is running successfully with the configuration file I just modified.

Apache server is up and running.


Next I want to confirm the web page status from the terminal. I sent a quick curl command to localhost:8080 and can now confirm, with a 200 success message, that Apache is the server on port 8080 of the Ubuntu virtual machine's localhost.

Apache is now live and serving content.

Next I need to configure Nginx as a proxy. Here is the configuration needed for the Nginx server.
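A sketch of what that server block looks like (the proxy port follows the Apache setup above; the header directives are the usual additions for passing client information through):

server {
    listen 80;

    location / {
        # Hand every request to Apache listening on port 8080.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}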

Now I restart the Nginx server, and when I navigate to the URL of my cloud server I get an Apache page. Inspecting the response headers in the browser's Network tab confirms who answered the request:

Server: nginx/1.14.0 (Ubuntu)


Nginx as a reverse proxy for Apache.
So now I have successfully configured Nginx as a reverse proxy for Apache. I can keep building content, and the stack is ready to scale and run efficiently, delivering static HTML content and SQL-backed data at the same time.


Thursday, March 12, 2020

Python - Network Programming - Cloud based reverse shell

Creating a reverse shell to do work or modifications on a remote machine can at times be essential. Many years ago I remember working in sales or as an analyst and not understanding networking and systems. This process seemed magical as someone could remotely start opening files, updating printers or just troubleshooting any machines from what seemed to be a far off distant land (the server room).

Now I have learned a lot since those days, and here I want to show in a simple way how a reverse shell is created: it is just a connection from one computer to another. In essence I can do anything on the remote machine as if I had just plugged a monitor and keyboard into it.

There are two main parts in creating this type of connection. First I have a server and then I have a client. Each will be using a separate Python file for this process to work.

I find that the easiest way to understand large or complex systems is to understand the individual components that make up the greater whole. First I created a socket, which connects the two computers. I tell the system to listen on port 9999 for our incoming connection.

The creation of a socket which I will be connecting to.

From here I am able to bind the socket and start listening.

Binding and listening for connections.
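A sketch of the server side, using port 9999 as described above (the later test run used 9998):

import socket

# Create a TCP socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Bind to all interfaces on the listening port and wait for the client.
print("Binding the Port 9999")
s.bind(("0.0.0.0", 9999))
s.listen(1)

conn, addr = s.accept()
print("Connection received from", addr)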


I created Ubuntu servers in the cloud to test this out. Here the client.py file is told which server to connect to and on which port.

The beginning of the client side connection.
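The client's opening lines would look roughly like this (the server IP is a placeholder; the real address is not shown in the post):

import socket

SERVER_IP = "203.0.113.10"  # placeholder for the cloud server's IP
PORT = 9999

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((SERVER_IP, PORT))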

Here is where it gets interesting. I define the actual pipes the data will flow through, and the flow of encoding and decoding between the two machines.
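Continuing the client sketch above (socket s), the command loop amounts to something like this:

import subprocess

while True:
    # Receive a command from the server and decode bytes to a string.
    command = s.recv(1024).decode()
    if command.strip().lower() == "exit":
        break

    # Run the command in a shell, capturing stdout and stderr.
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)

    # Encode the output back to bytes and send it to the server.
    s.send((result.stdout + result.stderr).encode())

s.close()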



Now that I have written all the code for a reverse shell, I wanted to see how I would implement this completely remotely. Meaning, I created an Ubuntu server in the cloud that I am remotely connecting to via SSH, and another Ubuntu server, also in the cloud, that will serve as the client's machine.

Once I was connected to my server and client machines, I uploaded the Python files and started the "server.py" file on the main server. The output says "Binding the Port 9998", so I know it will now be listening on port 9998 for incoming connections. On the other machine I then started the "client.py" file. Instant success!

On the left the client.py file is started. On the right a connection is received from the other machine and I have a command line.
I am /root.


As a simple test I input the command "ls" to list the files. Since this is a fresh server for this example, the only file in the working directory is "client.py", and the prompt shows that I am in /root.


Reverse shell in action.


For this example I just listed the files and then ran an echo command of "hey". As you can see, the "hey" is echoed on both machines. From here I have a command line that can remotely control the computer from the terminal, and I can do whatever modifications or file transfers are required.

Python Automation - Scheduling tasks for specific days or timeframes

Python is a powerful language that helps with automating tasks, from file backups to running commands or transferring server files.

This script is a simple example of how Python can be used to schedule a task. For illustrative purposes, the function being called here is wait(), which in essence does nothing but sleep for 1 second and then print "jason". The function could instead tell the system to back up all users' files, check inventory levels, or send weekly reports to certain people. The possibilities are endless, but the building block is here, and the great part is that I can create a Python program to automate tasks and routines to run anywhere from once every few seconds to once a week.
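The post does not name the library, but the popular third-party schedule package fits the description (a sketch; install it with pip install schedule):

import time
import schedule  # third-party scheduling library

def wait():
    # Illustrative task: sleep for 1 second, then print.
    time.sleep(1)
    print("jason")

# Run the task every 5 seconds; the same library also handles specific
# days and times, e.g. schedule.every().monday.at("09:00").do(wait)
schedule.every(5).seconds.do(wait)

while True:
    schedule.run_pending()
    time.sleep(1)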

Simple Python scheduler

Automated Exploitation of a Bluetooth vulnerability that leads to 0-click code execution

This blog post covers an interesting vulnerability that was just discovered earlier this year and an open source free tool that was created ...