Go get me some cookies


Introduction:

There isn't really much of an intro for this post; I am simply going to demonstrate how to use cookies with Go’s net/http library.

This in itself is super easy (unless you are a noob like me).

Onto the code:

    package main

    import (
      "net/http"
      "net/url"
      "net/http/cookiejar"
    )

    func main() {
      jar, _ := cookiejar.New(nil)
      client := &http.Client{Jar: jar}
      client.Get("http://google.com")
      formData := url.Values{"username": {"myUsername"}, "password": {"myPassword"}}
      client.PostForm("http://google.com/login", formData)
    }

First off, the imports used are under the assumption that you are posting to some service as if you were logging in. If you do not need this, you can drop line 5 and anything url-related.

The first import is the standard net/http for making requests; next is the net/url import, which is used as an easy way to represent form values when they need to be submitted. Lastly is net/http/cookiejar, which contains the implementation of a cookie jar that will be used to store our session cookies while making requests.

On line 10 we create a new instance of the cookiejar, passing in no options, as this will get you started 99% of the time. Once we have a new cookiejar object we pass it as a parameter to a new instance of the http.Client.

Under normal circumstances requests such as GET/POST would be made directly with helper functions like http.Get. But because we want to reuse the cookies between requests, we need to create an instance of the underlying http.Client.

That's all there is to it really; in summary, create a cookiejar and ensure you use the same http.Client object for all your requests.

An analysis of a Chrome extension:


Introduction:

In this post I hope to give you an introduction to the process of analyzing a Chrome extension. To demonstrate this I will be analyzing a Chrome extension called Hola Unblocker. While the article focuses on reversing and analyzing the Hola Unblocker extension, the process can be generalized to most Chrome extensions.

The high level structure for analyzing an extension is as follows:

  • Get hold of a copy of the extension.
  • Unpack it and look at the files it contains.
  • Read through the manifest.json to see which files are used and what for.
  • Dig into the background scripts and then the UI files (popup and content scripts).

Getting started:

Tools:

In order to successfully analyze an extension, you will need the following tools:

  • Good text editor. Since a fair amount of the analysis requires reading code, a good versatile editor is useful. I personally used Sublime Text 2, but Notepad++ or Vim could work just as well.
  • Archive manager. The archive manager will be needed to extract the zip file contents. Personal tool == 7-Zip, but anything that can handle zip files (including Windows Explorer) is fine.
  • [Optional] Node.js. This is purely optional and I use it to extract function names from scripts; if you want to use the same script you will need to have node.js installed.
  • [Optional] Wget. Used to grab a copy of an extension from the Chrome store without requiring the extension to be installed.

Chrome Extension Internals:

Chrome extensions are usually built either with HTML + JS (the most common extension type) or with native code. Native code usually presents a fairly hard analysis target, as it requires reversing a compiled DLL. The HTML + JS route, on the other hand, is a bit easier and thankfully follows a fairly structured means of execution.

Getting the extension:

So to get started, the first thing we need is to get hold of the extension. This is not much of an issue when it is provided via a direct link, but in the case of a Chrome store extension (which is where most extensions are hosted), getting hold of it is a bit trickier.

Note: for obvious reasons, if the extension is malicious we would not want to install it, so I am going to gloss over analyzing an already installed extension.

If we start by searching the Chrome store for ‘hola unblocker’ we will be redirected to this URL (https://chrome.google.com/webstore/detail/hola-unblocker/gkojfkhlekighikafcpjkiklfbnlmeio). Note: at times the search didn’t work for ‘hola unblocker’ and I had to use Google search instead to locate the link.

From this URL we can extract the ID of the extension. The ID is the last part of the URL before the parameters, which in our case would be:

ID: gkojfkhlekighikafcpjkiklfbnlmeio

Once we have the ID we want a direct download link for the extension. In order to get a direct download link, we insert the ID we obtained in the previous step into the URL template below where it says <ID Here>.

Template: https://clients2.google.com/service/update2/crx?response=redirect&x=id%3D<ID Here>%26uc

Result: https://clients2.google.com/service/update2/crx?response=redirect&x=id%3Dgkojfkhlekighikafcpjkiklfbnlmeio%26uc

This allows us to download the extension directly without needing to install it.
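
If you would rather script this step, a small Python sketch along the lines below will fetch the .crx; the extension ID is the one extracted above, and the output filename is just an arbitrary choice of mine.

    import urllib.request

    ext_id = "gkojfkhlekighikafcpjkiklfbnlmeio"  # Hola Unblocker's ID from the store URL
    url = ("https://clients2.google.com/service/update2/crx"
           "?response=redirect&x=id%3D" + ext_id + "%26uc")

    # Follows the redirect and saves the extension locally
    urllib.request.urlretrieve(url, ext_id + ".crx")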

Unpacking the extension:

Once the extension has been downloaded, rename it from .crx to .zip, since the extension is essentially a compressed zip file (with a small CRX header in front that most archive tools happily ignore). Then extract the file we downloaded with your zip extraction tool of choice.
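
Again, if you prefer to script it, a rough Python sketch of the same step might look like this (the file and directory names are just examples; zipfile can usually cope with the small CRX header, and if it complains a normal archive manager will still open the file):

    import zipfile

    # The .crx is a zip archive with a small header in front of it
    with zipfile.ZipFile("gkojfkhlekighikafcpjkiklfbnlmeio.crx") as crx:
        print(crx.namelist())              # quick look at what is inside
        crx.extractall("hola_unblocker")   # unpack into a working directory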

Extension Structure:

Once the extension is extracted, we will be faced with a listing similar to the image below. Keep in mind that the folder structure may change depending on the developer’s implementation. In any case, a manifest.json file will always exist in the extension’s root directory.
[Image: directory listing of the extracted extension]

Firstly, the entry point for any extension is the manifest.json. This file allows Chrome to understand how the extension is laid out, what files are included and where they should be loaded.

Next, the background.js file is where any non-UI code is usually located. The background.js will be loaded into memory when the extension is loaded by Chrome and on any subsequent reloads. Keep in mind that the background.js file does not execute any kind of startup function and is interpreted like a normal JS file, so as a developer all code needs to be wrapped in functions to prevent automatic execution. A background.js file is usually found in an extension that needs to access the Chrome extension API; by default this access is blocked for scripts that are loaded in UI areas such as the pop-up and content areas.

The content script (usually called content.js) is a JS file that is loaded into every page that matches a pattern provided by the author of an extension. These patterns make it easy to spot nefarious actions in an extension. For example, if a plugin advertises that it allows you to see how many unread Gmail messages you have, then it should not have a pattern that gives it access to yahoo.com.

Content scripts are very powerful as they allow extension authors to access any DOM elements that are on the page they are hooked into. A point worth noting is that if an extension author makes use of the native API, the permissions shown to the user indicating which pages a content script would be loaded into are hidden. Instead the extension will request access to your data on all websites (which, depending on the type of extension, may or may not be suspicious).

Finally, the popup.html is what is shown when a user clicks on the extension icon in Chrome. This can optionally load a JS file of its own. An extension may also sometimes include an options.html, which handles the saving of options for the extension. This sometimes provides an interesting insight into where and what data is being stored.

The Dive:

Before we dig into this I should mention that I suspected the extension was using proxies to re-route the traffic, and this analysis was just to see how exactly they implemented it.

When loading the hola unblocker manifest.json we see the following data:

 {
    "update_url":"http://clients2.google.com/service/update2/crx",
    "homepage_url" : "http://hola.org/unblocker.html",
    "permissions" : [
       "proxy",
       "webRequest",
       "webRequestBlocking",
       "<all_urls>",
       "storage",
       "tabs"
    ],
    "version" : "1.0.159",
    "background" : {
       "scripts" : [
          "scripts/jquery-1.8.3.min.js",
          "scripts/background.js"
       ]
    },
    "name" : "Hola Unblocker",
    "icons" : {
       "128" : "images/icon128.png",
       "16" : "images/icon16.png",
       "48" : "images/icon48.png"
    },
    "description" : "Access blocked content from any country in the world, free.",
    "browser_action" : {
       "default_popup" : "popup/popup.html",
       "default_title" : "Hola Unblocker",
       "default_icon" : "images/icon19_gray.png"
    },
    "minimum_chrome_version" : "18.0.1025.168",
    "manifest_version" : 2
 }

Manifest breakdown:

Initial URLs

"update_url":"http://clients2.google.com/service/update2/crx",
"homepage_url" : "http://hola.org/unblocker.html",

The initial URLs that we spot at the top of the manifest are fairly self-explanatory. But to break it down: the update URL is the location that will be checked to see if there is an update required, and the homepage URL is a link to the home page of the company / developer. The home page is optional but can provide additional information about the developer behind the extension (in case we only ever looked at the Chrome store).

Extension Permissions

"permissions" : [
   "proxy",
   "webRequest",
   "webRequestBlocking",
   "<all_urls>",
   "storage",
   "tabs"
],

The extension permissions dictate what features of Chrome the extension has access to; from a developer's perspective this list should be kept to a minimum to limit the damage your extension can do if compromised.

When looking at the list of permissions from the top down, the author of the extension is requesting access to Chrome's proxy, webRequest (and webRequestBlocking), programmatic injection, storage and finally tabs. Each permission is explained in the documentation page linked to it.

The permission I was most interested in was the proxy permission, which is how the extension handles the traffic filtering. While the <all_urls> permission is a little worrying, there is no evidence yet to suggest that it is being abused. The storage and tabs permissions are usually linked to saving the user's options and opening a new tab once the extension is installed to let the user visit the author's home page.

At this stage there is no information explaining the inner workings of the extension and the analysis of the permissions is based entirely on information provided by Google as a recommendation in their documentation.

HTML Pages and Extension Metadata

    "version" : "1.0.159",
    "background" : {
       "scripts" : [
          "scripts/jquery-1.8.3.min.js",
          "scripts/background.js"
       ]
    },
    "name" : "Hola Unblocker",
    "icons" : {
       "128" : "images/icon128.png",
       "16" : "images/icon16.png",
       "48" : "images/icon48.png"
    },
    "description" : "Access blocked content from any country in the world, free.",
    "browser_action" : {
       "default_popup" : "popup/popup.html",
       "default_title" : "Hola Unblocker",
       "default_icon" : "images/icon19_gray.png"
    },
    "minimum_chrome_version" : "18.0.1025.168",
    "manifest_version" : 2

Looking at the last bit of configuration from the top down, we are dealing mostly with metadata for the extension. The extension version number is used when doing update checks.

The background section indicates which script files are loaded into the background for later use. In the case of hola unblocker it is simply a copy of jquery and its own background.js file which we will analyze later.

Next up are the name, icons and description that will be shown for the extension in various locations.

Next the browser action section is defined. The browser action loads the badge for the extension into Chrome and also defines its icon and hover title. The popup.html that is defined here is another file that is usually worth investigating, depending on the type of extension being analyzed.

Finally, the minimum Chrome version and manifest version indicate what version of Chrome the user needs to be running. The minimum version usually affects users that run versions of Chrome installed from a repository, like the Ubuntu repositories. The manifest version indicates that the extension supports many of the newer security features implemented by Chrome, such as its XSS protections. It is usually a good sign when the manifest version is set to two; it should be noted that manifest version one is slowly being phased out by Google, so at some point all extensions will implement version two.

Background.js

The background.js file is usually where the meat of most extensions will be. Keep in mind, though, that the functions stored here are more akin to functions stored in a linked library; that is, they are not executed unless called. This is due to the fact that only code in the background.js file can access various Chrome APIs, so the bulk of the functions that need API access have to reside here. Also, since this is just a JS file with special access, any code not located in a function will be automatically executed the first time the extension loads or is reloaded.

Depending on the length of the background.js file it is sometimes useful to get a high level overview of the functions stored in the JS file. This allows us to pick targets to look at a little more closely rather than reading the entire JS file from top to bottom. To extract the functions from a file I wrote a small node.js script which parses a JS file and lists all named functions. While I understand this script will not work in all cases, it was enough for the current extension I am looking at and provides enough information to dig further.

The list below is generated from this script:

Command:

node .\extractFunctionsFromJSFile.js background.js

Function List:

vercmp
set_sites
pac_url
get_opt
update_ui
get_state
set_state
handle_auth_req
set_saved_state
client_cb
check_client
log
clog
handle_install

By scanning over the list there are already two functions that are of interest. The first is pac_url, which thanks to some prior googling points to the fact that the extension is using PAC config files. PAC configuration files, for those not in the mood to Google it, are simply Proxy Auto-Config files. Assuming that wasn't self-explanatory: these configuration files allow a browser to select a proxy dynamically based on a given URL and configuration.

The second function of interest to me is handle_auth_req, which just from its name tells us that it deals with some level of authentication. Starting with the second function (because it sounds more interesting) we perform a quick search in the background.js file. This returns two results: the first is the function definition and the second is its usage.

 function handle_auth_req(details)
 {
     if (!details.isProxy || details.realm!=="Hola Unblocker")
         return {};
     return {
         authCredentials: {
             username: "proxy",
             password: "*************"
         }
     };
 }

Even though I wasn’t looking for a fail we now find that they hard coded their user name and password for their proxy. For a malicious user this provides plenty of fun , for us we just note it down and continue the analysis. The second usage of the function is in the code that runs automatically when the background.js file is loaded (line 274). Looking at the line above we see a comment indicating that they wanted to protect their proxy from automated scanners which at least shows some effort to keep things kind of secure from their side.

Sadly when looking at the actual line of code we see the following:

chrome.webRequest.onAuthRequired.addListener(handle_auth_req,
    {urls: ["<all_urls>"]}, ["blocking"]);

We see that they set up a blocking listener for any URL that requires authentication. The obvious badness in this is that, while using the extension, any URL that is loaded goes through their proxy, which in turn will block all requests until the authentication request resolves. Now we can assume that they only handle a select set of URLs (yet to be verified), which would mean not much of an inconvenience. But if they end up proxying all URLs by mistake, any URL navigated to would be blocked until their listener finishes execution.

Now, taking a step back, the other function flagged earlier as possibly interesting was pac_url. Looking at the function in detail, it doesn't really do anything particularly interesting; instead it generates the location where the PAC file can be downloaded from. With some simple mental math we can work out the location that will be returned (which looks something like this):

  https://client.hola.org/proxy.pac_dev?browser=Chrome&ver=24

Navigating to that URL will generate a proxy PAC file that your browser will understand. As an aside, for a browser to use a proxy PAC file, the file needs to contain a JavaScript function with the following signature:

  function FindProxyForURL(url, host)

The Hola extension provides this method and two other basic helper functions which it uses to do some neat little proxying. For example, their code will not allow any calls to the following extensions (gif, png, jpg, mp3, js, css, mp4, wmv, flv, swf, json) on the Pandora site to go through their proxy. This should save them the effort of having to proxy every single resource on a site they want to provide support for.

Another interesting and slightly worrying function that we come across by reading through the file is the clog function. This function essentially makes a call to their remote servers with a message of their choice plus a UUID, which I can only assume is used to uniquely identify users.

This is an example of the GET request that is made to the server:

https://client.hola.org/proxy.pac?browser=Chrome&ver=1.0.173&uuid=001186cf98ad20743db252565a3fa7a9&id=123&info=123123&clog=1

The UUID is unique to my PC and the id / info are just parameters passed to the function. With this info sent as GET requests it would be very easy to compile this data into a database and do analysis over time against users of their service. This kind of data leakage is slightly worrying, but since we have access to the source code (which you would if you have been following along with this blog post :) ), you can simply remove the calls we don't like, repackage the extension, and it will work the same as before.

If we perform a quick read through the rest of the background.js file we will notice that there is not much more of interest; aside from the code at the end of the file that sets up the blocking listeners, the rest of the functions remain available but unused by the background file itself.

Popup.html + Popup.js

The next part of the extension that we will be looking at is the popup-related files. These are popup.html, which handles displaying the UI when the badge button is clicked, and popup.js, which handles any JS-related calls, since inline JS is not allowed in version two of the extension manifest.

The popup.html is fairly simple as there is no UI-related content for this extension; it simply contains a couple of base images to display to the user.

The popup.js file does not do too much either. It starts off by adding some dynamic HTML to the existing popup.html page so that a user can enable and disable the extension at will. It also adds some simple UI features, such as making sure the dynamic links that are added actually open their respective web sites. Aside from these basic functions, there is not much more to look at in these two files.

Fin

With those being the last pages to look at, there is not much more that we can analyze in this extension. Yes, they do some weird things in their code and there are some issues that might worry a more security-conscious user, but on the whole the extension is simple yet effective. If a person was so inclined it would be fairly easy to generate a custom PAC file and extension that would use your own personal proxy server, which you have more control over. This could then be distributed to people you wish to share access with, which is a fair amount simpler than asking a user to set up various bits of configuration.

In conclusion, the analysis of an extension is fairly structured. First grab a copy of the extension, then extract it to get a list of the files contained within. Next analyze the manifest.json to determine which files are being used, where they are used and what they are used for.

The next part of the analysis is a bit subjective; depending on your style, either analyze from the background up to the UI or from the UI down to the background. The background-first analysis usually tends to provide a more structured approach, as the functions are usually stored in an easily parse-able state.

Cleanups and Forward

So the question now is what happens after your analysis has been completed (as is the case now). My first plan is to automate the entire process. While this means simply converting each of the steps into a script, it takes a little time to test it all, so I am putting it off for a bit. If on the other hand you wish to build the script yourself, please go ahead, just let me know so that I don't duplicate work that may be better than mine.

The other thing I wanted to do, now that I know what was wrong with the extension (or rather what I didn't like), was patch the extension to work around these issues. The two issues I wanted to patch were the fairly open permissions and the constant logs to the author's remote servers. Patching the remote server calls is pretty easy: simply delete the clog function and then remove any calls to it as well. While I was at it I fixed a few tabbing issues to make things easier to read.

So without further ado, here is the diff file for my patch: https://gist.github.com/4676624. This diff removes the clog function as well as changing the permissions within the extension. All these changes still allow the extension to function as normal.

With that out of the way, I am not sure if I missed anything in my analysis or if there is anything further I could have done. If you feel I should change anything or add more information, please feel free to contact me.

Ghetto Dynamic DNS

The story goes like so: I wanted dynamic DNS with a custom domain, but I didn't want to pay the prices that DynDNS and the like were asking, so I rolled my own with the help of Amazon Route 53.

How To

  1. Register your domain in the Amazon Route 53 UI.
  2. Update your domain's records to point to the DNS servers given to you in step 1.
  3. Wait for your changes to propagate.
  4. Install the following Python packages on your local machine (pystun and cirrus).
  5. Update your ACCESS key, SECRET key and domain in this script.
  6. Run the script on the server/PC that is behind the dynamic IP.
  7. Maybe (if you want) add the script to cron and let it run automatically.

This is not my finest scripting, so feel free to poke and jeer (although rather send me fixes). Someone is going to bring up the fact that you shouldn't store your keys in the script; if you feel this way, feel free to replace those two lines of code with:

access_id = os.environ['AWS_ACCESS_ID']
secret_key = os.environ['AWS_SECRET_KEY']

This will get your keys from your current bash/shell session instead.

Finally the code is fairly easy to read but ping me if you need help.
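
If you just want the shape of the idea without reading the full script, here is a minimal sketch of the same flow. It assumes pystun's get_ip_info() helper for discovering the public IP and uses boto3 for the Route 53 update rather than the cirrus library the script relies on; the hosted zone ID and record name are placeholders you would swap for your own.

    import os

    import boto3  # assumption: boto3 here instead of the cirrus library used by the script
    import stun   # pystun

    # Discover the current public IP from behind NAT via STUN
    nat_type, external_ip, external_port = stun.get_ip_info()

    route53 = boto3.client(
        "route53",
        aws_access_key_id=os.environ["AWS_ACCESS_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_KEY"],
    )

    # Placeholder zone ID and record name -- swap in your own
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "home.example.com.",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": external_ip}],
                },
            }]
        },
    )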

Because I can


I love building stuff, and it should come as no surprise that I have tinkered with the Arduino. Sadly though, I have never really built anything uber useful with it. The Butler project got dismantled after moving houses and I haven't had a need to use it at my current residence.

So with that out of the way, this is the first bit of code beyond hello world that I have written for the Arduino. It will display a message of your choice on an LCD and then slowly scroll to the end. Once it reaches the end it will reverse direction and scroll back to the start. If you don't want it to reverse direction, the code is easy enough to change so that it starts displaying from the start of the message again instead.

You can find the code over @ https://gist.github.com/3299197

If you find any bugs/issues, ping me and I will fix them, or submit a pull request.

Because I can


I love building stuff, and one of the many things that fascinates me is seeing the many bots that have been built to play various MMORPG games.

While these bots are usually built to aid people in farming either currency or items, they are an impressive feat of programming. So recently, while discussing the latest news about various farming techniques on #zacon with @Hypn, I wondered what it would take to build a bot that played a game for you.

As it turns out, even the simplest of bots is not all that simple. Before jumping straight into building a MMORPG level bot I figured I would start small. If anything this would allow me to get to know some of the basics of building a bot that plays a game before trying a large project and getting entirely lost.

Goals and decisions


The goal then was to build a bot for Bejeweled on the Windows platform using the AutoIt scripting language in minimal time.

A secondary goal was that once the bot was completed in AutoIt, it could possibly be re-written in another language. This would allow us to take the logic from AutoIt and apply it to the newly selected language.

Bejeweled was chosen for two reasons: first, there was no monetary attachment to winning, and secondly it's a fairly simple game to play. With it being a simple game, it was hoped that building a rule set for the bot would be easier.

Windows was chosen simply because the majority of games are built for Windows and any knowledge gained could be applied in writing future bots.

AutoIt was selected to script the interface because its simple language hides a number of pretty advanced features. Most importantly, it allows you to control the mouse and keyboard, which allows you to mimic what a human can do easily.

Research


If you haven’t played Bejeweled, Wikipedia’s entry (http://en.wikipedia.org/wiki/Bejeweled) is a reasonable introduction. The game has a long history and has been ported to a huge number of platforms.

The information we needed in the beginning to get started was the following:

* The location of the top left corner of the board.
* The size of each block.
* The location within each block where we could find a pixel that would indicate the color of the block.
* The hex/decimal value of each color to be used for matching (this is statically scraped and stored.)

Each of these bits of information can now be found in the README.

Implementation


Step 1 : Acquiring the board co-ordinates

Acquiring the co-ordinates of the board’s top left corner is important as we use it as an offset when calculating anything else on the board.

The first possible way of getting the start co-ords is by asking the user to click on a specific location. This is inaccurate and, depending on the user, subject to error.

Instead, we find the starting position by using a nifty function provided by AutoIt called PixelSearch. PixelSearch works by asking for a specific color in hex format and then searching within a bounding box. The bounding box is indicated by a top left position and a bottom right position.

Research by @Hypn found that the starting position of the board could be found by looking for a pixel with the color 0x37372C. Of course, the problem with this kind of heuristic is that any other pixel with the same color could introduce a false positive and throw subsequent calculations off. While one could extend the heuristic to include a search for additional known pixels, we recommend that you run the game in full screen to avoid any conflicts.

To grab the starting co-ords using the information above we run the following piece of code before anything else.

   Local $coord = PixelSearch(0, 0, 1440, 900, 0x37372C)

This will store the co-ordinates of the magic pixel in the $coord variable. The X and Y values can be accessed by indexing the variable as an array, which means that $coord[0] == X and $coord[1] == Y.

Step 2 : Build colored memory map

Now that we have acquired the coords of the board (which are stored in $coord) we need to build a representation of the board in memory. This will allow us to solve matches by looking for specific patterns instead of brute forcing every square and hoping that a match is found.

Func buildColorMap()
   Local $ucm[8][8]
   ;X and Y relate to the x-axis and y-axis
   For $y = 0 To 7 Step 1
      For $x = 0 To 7 Step 1
        $ucm[$y][$x] = PixelGetColor($coord[0] + 18 + ($x * 37), $coord[1] + (($y + 1) * 37) - 11)
      Next
   Next
   Return $ucm
EndFunc

Local $ucm[8][8]

In the piece of code above we start off by creating an 8x8 two-dimensional array which represents the rows and columns of the board.

For $y = 0 To 7 Step 1
  For $x = 0 To 7 Step 1
      $ucm[$y][$x] = PixelGetColor($coord[0] + 18 + ($x * 37), $coord[1] + (($y + 1) * 37) - 11)
  Next
Next

We then run a two-level loop which calculates the exact co-ordinate of the pixel where we want to scrape the color from each block.

During each iteration, once we have calculated the X and Y coords, we use a function provided by AutoIt called PixelGetColor to get the color of each block. This is stored in the relevant position in the array for later access.

Converting coords to words

Local $coloredMap[8][8]
for $i = 0 To UBound($crd) - 1
  For $x = 0 To 7 Step 1
     Switch $crd[$i][$x]
        Case 16775956,16776960
           $coloredMap[$i][$x] = "Yellow"
        Case 16206352,16471568 ; we also match 15* as red in most cases
           $coloredMap[$i][$x] = "Red"
        Case 16777215,16514043
           $coloredMap[$i][$x] = "Silver"
        Case 12749289,12885488,11902199 ,13021688,12029936,12157930
           $coloredMap[$i][$x] = "Purple"
        Case 5420321,4303909
           $coloredMap[$i][$x] = "Green"
        Case 16675594
           $coloredMap[$i][$x] = "Orange"
        Case 6220025,5565436,8249328,5892858,4911103,6353407,6547447,7921905,6874614,7202036
           $coloredMap[$i][$x] = "Blue"
        Case Else
           ;If the color code is not matched above then we do a rough check
           ;This may be wrong so we mark with a * (note this has yet to match a wrong color)
           $checker = StringLeft(StringFormat("%i",$crd[$i][$x]),2)
           Switch $checker
               Case "16"
                 $coloredMap[$i][$x] = "Orange"
              Case "15"
                 $coloredMap[$i][$x] = "Red"
              Case "52","75"
                 $coloredMap[$i][$x] = "Blue"
              Case Else
                 $coloredMap[$i][$x] = $checker
           EndSwitch
     EndSwitch
  Next
Next

The conversion of a pixel to a word is done in the code above and sadly it is quite messy. We start by kicking off a two-level for loop to iterate through each of the blocks in the grid. We then extract the decimal value from the original array variable and run it through a switch statement.

Each of the cases in the switch statement was manually extracted by me running the game multiple times and extracting the value for each square in its various forms and then saving it.

Each of the switch cases saves the word to an array which will be used later to solve the game.

While writing this it just occurred to me that this second step is probably not needed and could have been done originally when the map was first scraped.

Step 3 : Solving a colored map

The solver function takes as input a color map generated in the previous step. With this input it iterates through each of the colors in the map and performs various checks.

Unfortunately my AutoIt skills kinda suck and I was not able to make this part of the code generic enough. As such each of the check functions is mostly a copy/paste template with various parts of the rules modified.

Anatomy of a rule

Rule Checks

The check function consists of the following steps. First, depending on the rule we are checking, we need to make sure that there is enough space around the current block for the pattern the rule dictates.

For example to check the following rule :

O
XOO

X is an invalid square (and, in this case, the one being checked)
O is a valid square.

To solve the check above we need to swap the invalid square with the square above it. Before this rule can be run it needs to be expressed in a generic way. As such we need to ensure that there is a square above the current square and two squares to its right. We perform this check by making sure X leaves room for two more columns to the right of it, and that Y is greater than 0. If these checks all pass, then we can perform a click on the respective squares.
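
The real checks are written in AutoIt, but to make the shape of a check concrete, here is the same rule sketched in Python; board and click_square are hypothetical stand-ins for the colour map built earlier and the mouse-click helper.

    # Illustrative sketch only -- the bot itself does this in AutoIt.
    # board is the 8x8 colour-word map, indexed as board[y][x];
    # click_square(x, y) stands in for the mouse-click helper.
    def check_up_swap(board, x, y, click_square):
        # Pattern:  O
        #           XOO   (X is the square being checked)
        # Need two columns to the right and one row above the current square.
        if x > 5 or y < 1:
            return False
        if board[y - 1][x] == board[y][x + 1] == board[y][x + 2]:
            # Swap X with the square above it by clicking both squares.
            click_square(x, y)
            click_square(x, y - 1)
            return True
        return False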

Rule Solutions

In this case the check would execute code to click on the current square and then execute code to click on the square above. The clicking code is fairly generic but unfortunately I did not realize this early on in the project so there is a fair amount of duplicate code.

Step 4 : End Game

Once all the above is in place, the code executes until the board is cleared and we get the end game message.

The end game check is run before everything to prevent the bot from trying to play an extra round that would not be needed.

For debugging purposes I only ran the bot for 10 runs, but at this point the bot could safely run with a ‘While True’ and not go into a weird state that could annoy the user.

Conclusion

Building the bot was quite fun and, once I got past the initial learning curve, the development was pretty easy. Sadly though there are still some holes in the bot which I don’t feel like fixing currently.

The first is that there are a couple of rules that are missing, and as such there are times when it will seem like the game is doing nothing. If this happens, solve a match and it will continue as normal.

The search area for the game currently assumes my screen resolution. This should be changed to be a bit more dynamic and look at the user's current screen resolution. I know that @Hypn has code that will do this, but I have yet to add it.

It’s still a work-in-progress; if you like it, or have comments, let me know.