Thursday, April 30, 2020

User management application using Node.js and Redis

Here I created a user management application using Redis and Node.js, which can add users and search through their detail records.

The application consists of a few components. The main app.js file controls most of the application, and I used Express for Node.js with Handlebars templates to add users, display their details and search through them.

In the adduser.handlebars file I create a form with the "POST" method and an action of /user/add, which sends the information below to the server to create the new user in Redis.
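The file itself isn't reproduced here, so as a sketch, such a form might look like the following. The field names (id, first_name, etc.) are assumed to match what app.js reads from the request body.

```html
<!-- adduser.handlebars (sketch): posts the entered fields to /user/add -->
<h2>Add User</h2>
<form method="POST" action="/user/add">
  <input type="text" name="id" placeholder="User ID">
  <input type="text" name="first_name" placeholder="First Name">
  <input type="text" name="last_name" placeholder="Last Name">
  <input type="email" name="email" placeholder="Email">
  <input type="text" name="phone" placeholder="Phone">
  <input type="submit" value="Add User">
</form>
```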



Next in the details.handlebars file below I set everything up to handle the display of the records that are retrieved from Redis.
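As a sketch of that display (assuming the same field names as the add form), the details view could be as simple as:

```handlebars
<!-- details.handlebars (sketch): renders one user record from Redis -->
<h2>User Details</h2>
<ul>
  <li>ID: {{user.id}}</li>
  <li>Name: {{user.first_name}} {{user.last_name}}</li>
  <li>Email: {{user.email}}</li>
  <li>Phone: {{user.phone}}</li>
</ul>
```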



Then I have a searchusers.handlebars file which handles the search functionality.
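A minimal sketch of that search form, posting the ID to the /user/search route described later:

```html
<!-- searchusers.handlebars (sketch) -->
<h2>Search Users</h2>
<form method="POST" action="/user/search">
  <input type="text" name="id" placeholder="Enter user ID">
  <input type="submit" value="Search">
</form>
```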



The main.handlebars file handles the HTML output and looks much like any other HTML page, the interesting difference being how the body content is handled. The triple curly braces tell Handlebars to render the view content, unescaped, within this container.
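A sketch of that layout file (the page title is an assumption):

```handlebars
<!DOCTYPE html>
<html>
<head>
  <title>User Management</title>
</head>
<body>
  <!-- triple braces: insert the rendered view's HTML unescaped -->
  {{{body}}}
</body>
</html>
```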


Now that I've gone over the files that connect with the main app.js file, it may help clear things up to look at how everything is wired together. Below are the different variables I set up at the top of app.js. This is a Node.js app using Express and express-handlebars, along with a few other packages that are required.
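A sketch of those require statements, assuming the packages named throughout this post (express, express-handlebars, body-parser, method-override and redis):

```javascript
// app.js (sketch): module imports for the packages used in this post
const express = require('express');
const exphbs = require('express-handlebars');
const bodyParser = require('body-parser');
const methodOverride = require('method-override');
const redis = require('redis');
```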



With everything set up, it's simple to create the Redis client from here.
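With the node_redis callback-style API that was current in 2020, that looks roughly like:

```javascript
// Create the Redis client (connects to localhost:6379 by default).
const client = redis.createClient();

client.on('connect', () => console.log('Connected to Redis...'));
client.on('error', (err) => console.log('Redis error: ' + err));
```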


I then set the port and initialize Express within the app. I also created some middleware for the body parser, set up a view engine and configured method-override. This next part I found interesting, because here I am taking the user input from the HTML page and feeding it to Redis much as if I, as the admin, were manually adding users from the command line.

Next, here is how the search processing is handled. The application sends a POST request to /user/search, and if the object does not exist an error message saying "User does not exist" is returned. If the object does exist, the instructions below render the details for that particular object. In this case I am searching by user ID, but the search could run on any of the fields; a user ID is usually the most unique. There may be two John Smiths within the system, but there can only be one user001 and one user002.
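A sketch of what that search handler plausibly looks like, assuming each user is stored as a Redis hash keyed by their ID:

```javascript
// Search for a user by ID; render an error view if the hash is missing.
app.post('/user/search', (req, res) => {
  const id = req.body.id;
  client.hgetall(id, (err, obj) => {
    if (!obj) {
      res.render('searchusers', { error: 'User does not exist' });
    } else {
      obj.id = id;  // the key itself isn't stored in the hash fields
      res.render('details', { user: obj });
    }
  });
});
```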





Next, in the "Add User" process, I set up the application to handle a POST request to /user/add with the fields the user is required to enter: id, first_name, last_name, email and phone number. If there is an error it is logged to the console. If everything posts correctly, a new user is created in Redis and the user is redirected to the home page.
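A matching sketch of the add handler, writing the fields into a Redis hash with hmset (the node_redis v3 call for this):

```javascript
// Create the user as a Redis hash keyed by the submitted ID.
app.post('/user/add', (req, res) => {
  const { id, first_name, last_name, email, phone } = req.body;
  client.hmset(id,
    'first_name', first_name,
    'last_name', last_name,
    'email', email,
    'phone', phone,
    (err) => {
      if (err) console.log(err);
      res.redirect('/');  // back to the home page on success
    });
});
```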





Now I can search through the records and add users from this Redis/Node.js application, which I can access from the internet.






Tuesday, April 28, 2020

Cron job - Daemon - Shell script to send email alert to admins when RAM utilization hits critical levels

In this example I am creating a shell script that runs first as a daemon and then as a cron job. Cron jobs are scheduled tasks that can be set up to run automatically on particular days, weeks or even yearly quarters, while daemons are silent processes running constantly in the background, in an alert state and ready to spring into action.

First, I checked to see how much RAM is free on this server at any given time. The server is an Ubuntu LAMP stack in the cloud. In this particular instance 265MB of RAM is free.


This command returns the exact value for comparisons and is what I will plug into my program later on:
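The exact command isn't shown, but a typical way to get the free figure as a bare number is to read it straight from /proc/meminfo (piping `free -m` through awk's fourth "Mem:" column gives the same value):

```shell
# Free RAM in MB as a bare number, suitable for numeric comparisons.
awk '/^MemFree:/ {print int($2/1024)}' /proc/meminfo
```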


So now I know the parameters for where I want my RAM usage to be and when I want to be emailed about any alarming spikes. The alerts.sh program I created below sends emails to tester@slabj.com whenever the free RAM is less than or equal to 267MB. I set the threshold right at current usage to test my script.
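A sketch of what alerts.sh amounts to, assuming a working local MTA and the mailutils `mail` command; the threshold and address are the ones from this post:

```shell
#!/bin/bash
# alerts.sh (sketch): email the admin when free RAM drops to 267MB or below.
THRESHOLD=267
FREE=$(free -m | awk '/^Mem:/ {print $4}')
if [ "$FREE" -le "$THRESHOLD" ]; then
    echo "RAM free is LOW: ${FREE}MB" | mail -s "RAM LOW on $(hostname)" tester@slabj.com
fi
```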

Cool, it's working now, and I got an email telling me that the RAM is low. That was expected, since I set the "low" threshold at normal usage levels to check my script.


And here is the email in a little more detail. You can see that I am having "root" email me whenever the RAM free size is "LOW" which is an arbitrary parameter that can be set however I choose.


Here I'm having the script echo back what it is doing to the console. This is for illustrative purposes as daemons are not to be seen nor heard from unless they are taking action.


With the addition of a while loop, the script now runs indefinitely. The process starts running and won't stop unless I tell it to.

To make it clearer, here I am echoing responses for either state the machine is in.
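The daemonized version is the same check wrapped in a loop; a sketch, with a sleep interval added as an assumption so the loop doesn't spin flat out:

```shell
#!/bin/bash
# Daemon version (sketch): loop forever, echoing whichever state we're in.
THRESHOLD=267
while true; do
    FREE=$(free -m | awk '/^Mem:/ {print $4}')
    if [ "$FREE" -le "$THRESHOLD" ]; then
        echo "RAM free is LOW: ${FREE}MB"
        echo "RAM free is LOW: ${FREE}MB" | mail -s "RAM LOW on $(hostname)" tester@slabj.com
    else
        echo "RAM is OK: ${FREE}MB free"
    fi
    sleep 60   # check once a minute
done
```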



As you can see, with the daemon running constantly, the alerts are constant too. This is not necessarily ideal. I could easily exclude the echoes and have my daemon run all the time without the average user noticing. However, once there are many daemons, I don't want them all running in the background eating up memory or CPU power.


Fortunately, Linux has a tool just for this situation. Cron jobs are scheduled tasks that run at assigned dates and times. In this instance I scheduled a cron job to check the RAM usage every Tuesday at 9:20am. Ideally you'd check at different intervals, but since I initially created this on a Tuesday a little after 9am, it made sense to set the first cron job to run a few minutes later to test it out.
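The crontab entry for that schedule looks like this (the script path is a placeholder; cron fields are minute, hour, day-of-month, month, weekday):

```shell
# crontab -e: minute 20, hour 9, any date, any month, weekday 2 (Tuesday)
20 9 * * 2 /path/to/alerts.sh
```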

And that is it in a nutshell. I now have a program that is automated and chronologically scheduled to run at specific intervals to check system RAM utilization. Additionally, the system sends email alerts to the appropriate individuals if any administration intervention is needed to keep everything running smoothly.

JWT Tokens with Node.js and Express


In this example I show how to create access tokens for an application using Node.js and Express. JSON Web Tokens (JWTs) are a widely used standard for certifying a user's identity to a server before it sends data back to the client machine.

First the server signs a JWT with a secret key and hands it to the client. On subsequent requests the server verifies the token and reads the information inside if the token is valid. I also created the ability to have tokens expire or be refreshed as needed.

For the initial setup I just needed to make sure Node.js and Express were installed and up to date. Then I added a package called nodemon to restart the server whenever the code changes. To handle authentication securely, I also created two servers: one for handling the main requests and one used purely for authentication.


Additionally, in a .rest file I set up the parameters for the /posts and /login requests. The content type is application/json, fitting for JSON Web Tokens.
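A sketch of such a .rest file as used by REST-client editor extensions; the port numbers (4000 for the auth server, 3000 for the main server) are assumptions:

```http
### Log in and receive tokens from the auth server
POST http://localhost:4000/login
Content-Type: application/json

{
    "username": "Jason"
}

### Request posts from the main server with the access token
GET http://localhost:3000/posts
Authorization: Bearer {{accessToken}}
```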


The initial test to the server returns a 200 code along with the "accessToken" so I know everything is working up to this point. The access tokens are being generated.


I next modified the code to return a refresh token, so that the initial token can expire and I can issue users new tokens to keep verifying their identity, rather than having a single token that can be used over and over. This is also for safety, because it effectively lets me lock the server every time a token expires.
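A sketch of that flow on the auth server with the jsonwebtoken package; the route names, the 30-second expiry and the environment-variable secrets are illustrative assumptions:

```javascript
const jwt = require('jsonwebtoken');

let refreshTokens = [];  // in production these would live in a database

app.post('/login', (req, res) => {
  const user = { name: req.body.username };
  const accessToken = jwt.sign(user, process.env.ACCESS_TOKEN_SECRET,
                               { expiresIn: '30s' });
  const refreshToken = jwt.sign(user, process.env.REFRESH_TOKEN_SECRET);
  refreshTokens.push(refreshToken);
  res.json({ accessToken, refreshToken });
});

// Trade a valid refresh token for a fresh access token.
app.post('/token', (req, res) => {
  const refreshToken = req.body.token;
  if (refreshToken == null) return res.sendStatus(401);
  if (!refreshTokens.includes(refreshToken)) return res.sendStatus(403);
  jwt.verify(refreshToken, process.env.REFRESH_TOKEN_SECRET, (err, user) => {
    if (err) return res.sendStatus(403);
    const accessToken = jwt.sign({ name: user.name },
                                 process.env.ACCESS_TOKEN_SECRET,
                                 { expiresIn: '30s' });
    res.json({ accessToken });
  });
});
```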


OK, so now that tokens and refresh tokens are generating, I need to use them to access data. I am now getting back specific user data based on the token. Here the first post is returned for the username "Jason".
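On the main server, this is typically done with a small middleware that checks the Bearer token before the route runs; a sketch (the `posts` array stands in for real data):

```javascript
// Middleware sketch: verify the Bearer token, else respond 401/403.
function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];
  if (token == null) return res.sendStatus(401);
  jwt.verify(token, process.env.ACCESS_TOKEN_SECRET, (err, user) => {
    if (err) return res.sendStatus(403);  // expired or invalid token
    req.user = user;
    next();
  });
}

// Return only the posts belonging to the verified user.
app.get('/posts', authenticateToken, (req, res) => {
  res.json(posts.filter(post => post.username === req.user.name));
});
```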


Here below you can see that a token will expire and eventually the server is locked again. Any additional requests will return a "403 Forbidden" error.



This process is actually only part of the equation: JWT tokens are used in tandem with user passwords for additional security. It is an interesting process because it allows granular control over user access to server data, in a way where access can be granted, extended or restricted accurately and efficiently.


Wednesday, April 8, 2020

Gathering and saving eSignatures from an HTML form with Ajax, PHP & jQuery

For this example I show how to create an e-signature form where users can sign and the signature is then sent back to the server or database. I used HTML to construct the form and JavaScript to make the AJAX calls with jQuery, with the help of PHP. Below is the JavaScript code that captures the signature image and saves it via another PHP file called 'save_sign.php'.
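A sketch of that client-side code; the element IDs and the use of a canvas for the signature pad are assumptions:

```javascript
// Capture the signature canvas as a PNG data URL and POST it to PHP.
$('#save').on('click', function () {
  var dataURL = $('#signature-canvas')[0].toDataURL('image/png');
  $.ajax({
    type: 'POST',
    url: 'save_sign.php',
    data: { image: dataURL },
    success: function (response) {
      // The PHP script echoes back where the signature was saved.
      console.log('Signature saved: ' + response);
    }
  });
});
```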




Here in 'save_sign.php' I get the image, decode it and save the signature snapshot as a .png file in a folder I created called doc_signs.
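A sketch of what that script does; the timestamp-based file naming is an assumption:

```php
<?php
// save_sign.php (sketch): decode the posted data URL and write a .png
// into the doc_signs folder.
$data = $_POST['image'];
// Strip the "data:image/png;base64," prefix, then repair the '+' signs
// that URL-decoding turns into spaces, before base64-decoding.
$data = str_replace('data:image/png;base64,', '', $data);
$data = str_replace(' ', '+', $data);
$image = base64_decode($data);
$file = 'doc_signs/sign_' . time() . '.png';
file_put_contents($file, $image);
echo $file;  // tell the client where the signature landed
?>
```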







Here I have the output being read back from the server. Every time a new signature is saved, it is put in that document signature folder. To demonstrate the concept, I am reflecting the signatures right back onto the page as they get saved. In a production app, the only thing left to do is decide where the specific project wants to route the signatures; it may suffice to save the document signatures along with the other customer sign-up or purchasing data.


Friday, April 3, 2020

Setting up and configuring an entire email server in the cloud on Ubuntu (Linux) for a website

For this example I set up an email server on an Ubuntu cloud instance for a testing website I created. The process builds on previous examples: I started by installing Apache and PHP on the virtual machine, then moved on to installing and configuring the additional components.

First I installed Postfix, which serves as the MTA (mail transfer agent). This is the software responsible for delivering and receiving the emails. Then I installed Dovecot as the MDA (mail delivery agent), which delivers the emails to and from the individual mailboxes on the server. At this point I have an IMAP/POP3 email server, and I am going to install SquirrelMail on top of it to have a simple interface for managing email on my server.
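On Ubuntu the installs follow this pattern; the package names are the standard Ubuntu ones, though SquirrelMail was dropped from newer Ubuntu releases and may need a source install there:

```shell
sudo apt-get update
sudo apt-get install postfix                       # MTA
sudo apt-get install dovecot-imapd dovecot-pop3d   # MDA (IMAP/POP3)
sudo apt-get install squirrelmail                  # webmail interface
```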





Above I added Postfix to the server, and below I am confirming the service status. Everything looks good for the next steps.
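The status checks are the usual systemd ones:

```shell
# Confirm the mail services are active and running.
sudo systemctl status postfix
sudo systemctl status dovecot
```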


Below I now have confirmation that Dovecot is active and running. The email server is up at this point. 




I did a configuration review on SquirrelMail to make sure the settings are correct for my server. I just had to add a few read/write access modifications on the server for this to work correctly. Also, since this is a private email service, random users cannot sign themselves up; I, as the admin, create users directly on the server, as you would for a company's email accounts for employees.
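With this Postfix/Dovecot setup, system users double as mailboxes, so creating the accounts is a sketch like:

```shell
# Create the mail accounts directly on the server (admin-only signup).
sudo adduser tester
sudo adduser squirrelone
```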






I then tested to confirm that I can receive emails from my gmail account and I am also able to send emails to other users with accounts on my server. I created a test account of 'tester' and another of 'squirrelone'. Their respective email addresses are now tester@slabj.com and squirrelone@slabj.com. Here 'squirrelone' has received test messages from 'tester' and myself from my gmail address.






And that's it. I have an Ubuntu cloud-based email server correctly set up and configured for a domain I created, where people, and even other servers, are now able to send and receive emails.

Wednesday, April 1, 2020

Testing TCP/UDP baseline functionality with Netcat & cURL through the CLI (SSH)

Here I am showing how to set up a simple Netcat listening server to test the baseline functionality of TCP/UDP connections. I create a simple document, save it as technical.txt, and send it to the server I am testing.

Here I create the simple document.
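The exact contents aren't shown, so the text here is illustrative; any small file works:

```shell
# Create the test document that will be sent to the server.
echo "This is a technical test document." > technical.txt
cat technical.txt
```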

On the server for phub.info I set up a Netcat listener on port 8888 that is ready to receive the document.



Now from the 'HQ-cloud-green' server I connect with another Netcat instance to send the document over.
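The transfer follows this pattern; flag syntax varies between netcat variants (traditional netcat wants `-l -p 8888`):

```shell
# On phub.info -- listen on port 8888 and write whatever arrives to a file:
nc -l 8888 > technical.txt

# On HQ-cloud-green -- connect to the listener and send the document:
nc phub.info 8888 < technical.txt
```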


To confirm receipt of the file, I open it and view the contents from the terminal.

Great, so now I can send messages back and forth, but I want to test things a little further. I want to know how this port, and the data being sent through it, will be interpreted by browsers. With Netcat I can create a simple HTTP server to serve content to browsers and see whether my server is configured correctly. Here I pulled the data from the URL with cURL, and you can see the HTML for the test page I am serving from the Netcat server.
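A sketch of the pattern; the page markup is illustrative:

```shell
# Loop a canned HTTP response on port 8888 so every request gets the test page.
while true; do
  printf 'HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<h1>Netcat test page</h1>\r\n' | nc -l 8888
done

# From another machine, fetch it:
#   curl http://phub.info:8888
```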


And now when I visit port 8888 at my IP for phub.info (this is just a cloud droplet I spun up and destroyed after this example) I get the following output below and can confirm correct functionality of TCP & UDP.


Using a GPG Keychain to encrypt emails and any text messages


One of the most straightforward ways to integrate PGP keys with email is Apple Mail with GPG Keychain. You just enter your email address, generate a key, and set a password.
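For reference, the same operations GPG Keychain performs can be done from the command line with GnuPG; the recipient address and filename here are placeholders:

```shell
gpg --gen-key                                   # generate a key pair (interactive)
echo "Here is a test message." | \
  gpg --encrypt --armor -r you@example.com      # encrypt for a recipient's key
gpg --decrypt message.asc                       # decrypt with your private key
```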

Now whenever you want to send a message you just tell the program to encrypt with the key of your choosing and you get the results below.


Here is a test message.


I select to use a key I named 'newkey'.



Here is the message above encrypted with the PGP key.


Now, with the key configured for my email, all I have to do is select the message and choose to decrypt.

Automated Exploitation of a Bluetooth vulnerability that leads to 0-click code execution

This blog post covers an interesting vulnerability that was just discovered earlier this year and an open source free tool that was created ...