For the longest time I have had the idea to implement a notification system that would alert you if someone ever logged in (or tried to log in) using a known password of yours.

Whether the password was obtained from a previously compromised online account or just guessed based on your personality, the objective is that you are alerted as soon as possible. To this end I remember an app built by haroon@thinkst eons ago that worked on macOS and took a picture of whoever disturbed the Mac’s screensaver.

I wanted to achieve something similar for Linux, and then hopefully Windows at some point, but only when someone tried to log in using a known password (in theory it could be on any login). While the current implementation doesn’t take a snapshot, since it’s only hooked up to the SSH auth, you could in theory hook it up to a system’s GUI login and capture those login attempts as well.

Initial Research

After some basic investigation I found that this can be done fairly easily using a Pluggable Authentication Module (PAM). This is because it’s fairly dynamic and no major system changes are required.


The final implementation was built using the pam_script module which lets you hook into PAM and call a shell script of your choice.

First we need to grab a copy of the pam_script repo. Once downloaded, compile and install the library with the following commands:

cd pam_script # assuming you aren't already in the folder
autoreconf -i
./configure
make
sudo make install

Once installed we need to configure PAM by adding the line below to /etc/pam.d/sshd. This location is based on my current setup and may differ depending on how PAM is configured.

auth      optional  pam_script.so

Once we have PAM configured, we need to modify the script that is called when a user fails to log in. On my system this is located @ /usr/local/etc/pam_script_auth. Note that by default this is linked to a script located in the same folder; either modify the linked script or create a new script and update the link. I took the path of creating a new script and updating the link. You can find the demo script below.

#!/bin/sh

if [ "$PAM_USER" = "rc1140" ] && [ "$PAM_AUTHTOK" = "ThePoliceHaveMe" ]; then
  echo "the police are here" >> /tmp/alerts
  curl ''
fi

# success
exit 0

The script itself doesn’t do much aside from echoing some text to a file and then calling a canary token. The canary token works nicely here since you can send the alert notification with very minimal infrastructure and without indicating who has been alerted. I tend to restart the SSH service to ensure that the PAM modifications are loaded, but this is probably not needed.


That’s it really. You could probably make this way more complicated and covert with a full PAM module, or by modifying the source for the different login applications, but the PAM method provides a more generic and pluggable approach. Additionally, using pam_script makes this even more dynamic and easy to implement.


Having recently had to reverse an Android application built using the Flutter framework, I figured this is as good a time as ever to record my thoughts, now that I have roughly achieved what I wanted out of the exercise.


The primary goal was to figure out what type of API the app was using and then extract the API out into a library so that it could be used for another automation project.

Aside from that I wanted to know if the app was possibly leaking any sensitive info.


The issues listed below were encountered over the course of the project and were not really known to me up front.

The app was compiled with Flutter and presented a number of issues, listed below.

  • Code is compiled and executed using the Dart VM when in release mode (this should be the case with a black box test).
  • The application does not use the system proxy.
  • The application does not use user-installed CA certs, as it has a number of certs compiled into the app.
  • Whatever Java code is included is usually Kotlin (though this may not always be the case).
  • In the case of a black box app, it will likely have been compiled with ARM binaries only, and as such can’t be used in an emulator (unless you have the patience of a saint).

Initial plan of attack and subsequent updates

Initially I had planned to simply disassemble the APK, convert everything to classes and pull out the required HTTP calls from the code. This is an approach that has worked fairly well for me before, but because of the amount of compiled code it wasn’t viable this time.

While I had the app unpacked, some basic recon showed that the app was likely built with the Flutter framework by Google and supplemented by some native code which talked to a number of Google services (Firebase and a few others popped out based on their config).

It should be noted at this point that I didn’t have a rooted device with which to test the application (most guides require a rooted phone to use the applications that force an app’s traffic through a proxy).

With the disassembly not giving up any clear, distinct API calls, I proceeded to try some dynamic analysis. This ended up being one of those sad yak shaving moments, with me trying one emulator after another to find one that worked decently and let me run the application. In the end this proved to be pointless, as I couldn’t get an ARM image running, and the binaries within the application were compiled for ARM only and would not run within any of the emulators. I did not go down the route of using the ARM AVD images as they were just frustratingly slow.

What worked

After wasting a bunch of time trying to get the app analysed on an emulator, I finally dug out an old Android device, went through the pain of rooting it, and then followed the guide below.

If you don’t feel like reading the entire guide, the tl;dr is: root your device, install your proxy cert as a system cert, set up a gateway device on your network, and then set up a transparent proxy on that gateway. In the end this actually worked and I finally managed to intercept the calls from the app. Luckily I did not have to deal with root detection in the app, or I would have needed even more time to analyse it.

Conclusion, follow up research and other notes

In the end I managed to get the API calls I wanted, but the entire process felt like a pain of note. One other avenue I briefly explored, while trying to get the app running in the VM, was to pull down multiple older copies of the app to find a version that wasn’t using Flutter. This sort of worked, and I managed to get some extra actionable intelligence about the app, but it felt like another yak shaving session, as the API code seems to be generated from a definition file of sorts. Translating that into something usable felt like it would have taken too long, and would still have required me to figure out the config params from the Kotlin code, which was driving me crazy in its decompiled form.

Thankfully though, now that this is all set up, I can easily reuse the setup for further analysis.

If I get some time in the future I would like to set up a small project to try and reverse the AOT-compiled Dart code and see if that could prove useful.


Sometime last year Wunderlist was acquired by Microsoft, and while this is awesome for them, it left me with a dilemma: migrate to Microsoft’s to-do application or find an alternative.

Sadly, MS’s TODO application failed to migrate all my tasks (well, any tasks) from Wunderlist, which left me looking for an alternative. Since the announcement was made I honestly haven’t found anything simple + functional + fast enough to replace Wunderlist.

Recently though I have started using Emacs’s org mode more and more. This, combined with the awesome Orgzly application that was suggested somewhere on the net, meant that I finally had a solution to the Wunderlist migration. The beauty of using org files for my todos + agenda is that it’s all simple text files, which means syncing is super simple and it caters for all my needs. Making matters even better, I could now dump the Wunderlist data (which they provide in JSON) and spit out a super simple org mode file and happily continue.


The code to handle the conversion is fairly simple: find all the incomplete tasks and group them by the lists they were in. The code leaves much to be desired, but it’s pretty much a dump from my IPython history, so go easy on me ;P

Running the script will dump the output to stdout. Feel free to pipe this to a file, or optionally change the code to write directly to a file.
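The original script isn’t reproduced here, but a minimal sketch of the same idea looks something like the following. Note that the key names (lists, tasks, title, completed, list_id) are assumptions about the shape of the Wunderlist export, not the verified schema:

```python
import json
import sys

def wunderlist_to_org(export):
    """Group incomplete tasks by list and emit a simple org mode outline.

    `export` is the parsed Wunderlist JSON dump; the exact key names
    used below are assumptions and may need adjusting for a real export.
    """
    lists = {l["id"]: l["title"] for l in export.get("lists", [])}
    grouped = {}
    for task in export.get("tasks", []):
        if task.get("completed"):
            continue  # only migrate incomplete tasks
        list_name = lists.get(task.get("list_id"), "Inbox")
        grouped.setdefault(list_name, []).append(task["title"])

    out = []
    for list_name, titles in grouped.items():
        out.append(f"* {list_name}")          # one top-level heading per list
        out.extend(f"** TODO {t}" for t in titles)
    return "\n".join(out)

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python wunderlist2org.py export.json > todos.org
    with open(sys.argv[1]) as f:
        print(wunderlist_to_org(json.load(f)))
```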




The basics of targeting another platform are reasonably simple: install the relevant target platform using rustup, followed by installing the required linkers.

The steps to cross compile from ubuntu -> windows are as follows.

Install the target with rustup using the following command:

rustup target add x86_64-pc-windows-gnu

Once the download has completed, the linker can be installed with the following command:

sudo apt install mingw-w64

With that out of the way, you can now compile the lib/app for another target (in this case Windows) using the following command:

cargo build --target=x86_64-pc-windows-gnu --release

Overall the process was fairly painless once I found the right commands, though sadly that required a bit more digging than I would have liked.



This post will give a short overview of how to deploy your app and subsequent updates using torrents. First some basics will be covered regarding how to set up a tracker, followed by creating a customised deployment client. The use of a customised client is simply to show deeper integration; a normal torrent client would work just as well.

Deploying apps via torrents is not a new concept by any stretch, and the likes of Twitter were doing this ages ago. While that project is no longer maintained, there is a more recent tracker that can be used, which gives us the same result, though everything is not packaged as nicely.


The setup consists of the following parts:

  • Torrent Tracker
  • Torrent Client
  • Configuration update code
  • Management Service

Tracker Setup

Grab the tracker config

curl -L -o config.json

Start a Docker instance:

docker run -p 6880-6882:6880-6882 -v $PWD/config.json:/config.json:ro -v=5

First Seed

Create a torrent file using ( http://localhost:6881/announce ) as your announce URL. Note that localhost here means we are testing on the same host; if you want to seed or download the archive from a different host, replace localhost with either the hostname or IP.

Once the torrent file is created ensure that the torrent is seeded (either locally or on a remote host).
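For a feel of what that announce URL looks like on disk: a .torrent file is just a bencoded dictionary, and the announce key can be pulled out with a tiny decoder. A minimal sketch (handles the four bencode types, no error recovery):

```python
def bdecode(data, i=0):
    """Decode one bencoded value from bytes, returning (value, next_index)."""
    c = data[i:i + 1]
    if c == b"i":  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":  # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":  # dict: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    # byte string: <length>:<bytes>
    colon = data.index(b":", i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

def announce_url(torrent_bytes):
    """Pull the tracker announce URL out of raw .torrent file bytes."""
    meta, _ = bdecode(torrent_bytes)
    return meta[b"announce"].decode()
```

Nothing here is deployment-specific; it’s the same metadata any client reads, which is what makes building a customised deployment client straightforward.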


You can now grab the deployment archive, using the torrent file created in the previous step, with your torrent client of choice.

Up to this point there is nothing that differs from normal torrent distribution. To make the process a bit more automated, we can make use of a distribution service daemon.

The management daemon runs alongside your application and watches a known endpoint for version changes. As soon as a new version is published, the service can grab the latest torrent file and start downloading + seeding the archive. Once the archive is on the host, it can continue with whatever upgrade process you may have to start running the new version, either by forking to a deployment script or by performing the steps internally.
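A minimal sketch of such a daemon is below. Everything named here is an assumption for illustration: the endpoint URL, the shape of its JSON response ({"version": ..., "torrent_url": ...}), and the use of transmission-cli as the download/seed client.

```python
import json
import subprocess
import time
import urllib.request

# Hypothetical endpoint publishing the latest version manifest.
VERSION_ENDPOINT = "http://deploy.example.com/latest.json"
POLL_INTERVAL = 60  # seconds

def needs_update(current_version, manifest):
    """True when the published version differs from what we are running."""
    return manifest["version"] != current_version

def fetch_manifest(url=VERSION_ENDPOINT):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def run_daemon(current_version, deploy_script="./deploy.sh"):
    while True:
        manifest = fetch_manifest()
        if needs_update(current_version, manifest):
            # New version published: grab the torrent file and hand it to a
            # client, which downloads the archive and keeps seeding it.
            torrent_path = "/tmp/release.torrent"
            urllib.request.urlretrieve(manifest["torrent_url"], torrent_path)
            subprocess.run(["transmission-cli", torrent_path], check=True)
            # Continue with the usual upgrade process by forking to a script.
            subprocess.run([deploy_script, manifest["version"]], check=True)
            current_version = manifest["version"]
        time.sleep(POLL_INTERVAL)
```

The point of the sketch is the loop shape: poll, compare, fetch the torrent, seed, then hand off to whatever upgrade mechanism you already have.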


The initial setup of the tracker is a once-off cost. Thereafter, all that is required to distribute your archive/application is to seed your distribution archive and request it on each of the distribution endpoints, either manually or in an automated manner.