
Tuesday, July 10, 2018

Mad Catz Config for Linux


One of my favorite computer mice is the Mad Catz R.A.T. 7.  I really enjoy all the features (mode buttons, forward and back buttons, size and weight customization), and they really speed up my workflow even in non-gaming use cases.  However, when using Debian, I encountered an issue in which the Xorg display server causes the mouse to malfunction: it becomes unable to click, move, or select.  To recover from this state, one usually has to restart the X display server, which is very irritating.
After some research, I came across a solution from the Arch Linux Wiki.  Although you may use a different distro, the underlying issue is X's handling of extra buttons such as the mouse's mode button, not the operating system itself (I performed the fix on Debian).
First, get the exact name of the mouse with this command: xinput list | grep "id"
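The exact output varies from system to system, but the line for this mouse should look something like the following (the id number here is just an example and will differ on your machine):

⎜   ↳ Mad Catz Mad Catz R.A.T.7 Mouse          id=10    [slave  pointer  (2)]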
Please note that although this command works in most cases, you may need to check the system logs for the exact and correct device name.  Also, it is normal for "Mad Catz" to appear twice in the device name.  Then, with root privileges, create (or append to) a file in /etc/X11/ called xorg.conf.  Paste the following text into it (replace the name in quotes after MatchProduct with the name you received from xinput):

Section "InputClass"
    Identifier "Mouse Remap"
    MatchProduct "Mad Catz Mad Catz R.A.T.7 Mouse"
    MatchIsPointer "true"
    MatchDevicePath "/dev/input/event*"
    Option    "Buttons" "24"
    Option    "ButtonMapping" "1 2 3 4 5 0 0 8 9 10 11 12 0 0 0 16 17 7 6 0 0 0 0 0"
    Option    "AutoReleaseButtons" "20 21 22 23 24"
    Option    "ZAxisMapping" "4 5 6 7"
EndSection

Now, restart your X server and your Mad Catz R.A.T. mouse should start working!
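If you are not sure how to restart X: on a systemd-based Debian install, restarting the display manager service usually works (note that this ends your current session, and the exact service name depends on which display manager you use):

sudo systemctl restart display-manager

I will be posting a follow-up soon on configuring more options for this mouse in Linux.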

Thursday, July 5, 2018

A Good Way to Make Backups

This post is mostly meant to remind me of my backup procedure when I forget it.  While most people would suggest a tool like Macrium Reflect, those tools need to run inside an OS and often run very slowly.  Moreover, such tools can often back up only one operating system; many of them end up breaking dual-boot systems and cannot even back up Linux systems properly.
My favorite option is combining two extremely powerful tools: Clonezilla (which can even perform network-wide backups) and GParted.
To start off, download Clonezilla and GParted and burn each to a USB drive with any tool of your preference - this gives you the bootable live USBs.  Also, make sure you have a backup drive available.  Any drive with more storage than the amount of data you have actually used is fine (though avoid large flash drives, SSDs, and SSHDs for this).  Personally, I prefer to get a hard drive with the same dimensions as the hard drive bays in the computer; this way, I can easily swap drives when something goes wrong and clone the working backup onto the corrupted old drive.
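If you are on Linux, one way to write the images is with dd.  In the sketch below, clonezilla-live.iso and /dev/sdX are placeholders - use your actual ISO filename, and double-check the target device with lsblk first, since dd will overwrite it without asking:

sudo dd if=clonezilla-live.iso of=/dev/sdX bs=4M status=progress && sync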

Then, shut down your computer, open the BIOS/UEFI settings, and set your computer to boot from the Clonezilla Live USB.  Choose the default boot entry (usually the first one listed) when Clonezilla brings up its boot menu.

Next, select the appropriate language and do not change the keymap when prompted.  Then, do not enter the console; simply start Clonezilla.  Afterwards, choose device-device mode - this creates a full hard-drive clone.  Make sure to choose expert mode; it gives you the ability to create partitions proportionally, which is extremely helpful if your backup drive is not the same size as your current drive.  Additionally, choose disk to local disk as the cloning method; REMEMBER TO PICK THE CORRECT SOURCE AND DESTINATION DRIVES WHEN PROMPTED.
Then, when the advanced extra parameters menu pops up, use the space bar to select -icds (skip checking the destination disk size before creating the partition table), and select -k1 (create the partition table proportionally) when the partition table option menu pops up.

Afterwards, you can choose what Clonezilla should do after cloning and agree to the next few prompts (please double-check your source and destination drives here - Clonezilla will ask you to confirm them).  Once you have completed these steps, cloning should begin; the time it takes depends on the amount of data you have, but it should be relatively fast (a few hours at most).
After the clone, you should have a working backup.  Clonezilla has worked on all my multiboot systems so far (Linux with Windows, Hackintosh with Linux and Windows, Linux with some new experimental OS, Windows with another version of Windows); the other hard drive backup software I used before did not work nearly as well as Clonezilla.
If your current drive ever becomes corrupted, you can use GParted to transfer your last working backup onto the corrupted drive.

Simply boot up GParted (or any other Linux live distro that includes GParted), then copy and paste all the partitions over (right-click the partitions in the bottom half of the user interface).  Again, please make sure you select the correct source and destination drives.
I hope this guide helps you make successful backups.  Making backups is extremely important, especially when you undertake large-scale projects.  Backups can help you recover very quickly from broken drives, corrupted OSes, and even viruses.



Wednesday, July 4, 2018

Implementing a Multithreaded Spider

When developing my website view generator, I wanted to enhance the application by adding a spider that crawls the given site; this way, the user gets better results when checking their website's views.  However, I quickly ran into a problem: the classic single-threaded web spider, whether iterative or recursive, is simply too slow (especially the recursive kind).  Hence, I decided to implement my own multithreaded spider (you can check out the code in my WVG repo on GitHub - just download LinkQueue.java, LinkGetter.java, and Spider.java; as of now, the code is messy and needs to be improved).  In this post, I will share what I did to implement this extremely fast type of web spider.
Before starting, I decided to make the spider's depth depend on the number of threads and to have the user provide a link limit (the limit is not used exactly, but it prevents the multithreaded spider from running on too many links).  The first thread is special: it spiders only the given link.  All the threads start simultaneously (in a for loop over a thread array), and each thread only ends once the thread before it in the array has ended.  Every thread processes the links in its own queue (I implemented the queues with linked lists), and while the previous thread is alive, that queue keeps being updated: each thread adds the links it finds to the following thread's queue.  With this method, you can have many threads running simultaneously, each constantly checking its own queue for new links to spider.  A rough sketch of this design follows below.
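Here is a minimal, simplified sketch of this chained-queue design.  This is not the exact code from the WVG repo; the names (ChainedSpider, fetchLinks, crawl) are made up for illustration, fetchLinks is left as a stub, and the thread count and link limit are arbitrary.  You would kick it off with something like new ChainedSpider().crawl("https://example.com").

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ChainedSpider {
    static final int THREADS = 4;       // crawl depth == number of chained threads
    static final int LINK_LIMIT = 200;  // soft cap so the spider cannot run forever

    private final Set<String> seen = ConcurrentHashMap.newKeySet();
    private final AtomicInteger processed = new AtomicInteger();

    // Stub: download the page at url and return the links found in its HTML
    // (the regex extraction step is sketched further below).
    private List<String> fetchLinks(String url) {
        return Collections.emptyList();
    }

    public void crawl(String seed) throws InterruptedException {
        final List<BlockingQueue<String>> queues = new ArrayList<>();
        for (int i = 0; i < THREADS; i++) {
            queues.add(new LinkedBlockingQueue<String>());
        }
        queues.get(0).add(seed);  // the first thread only ever sees the seed link

        final Thread[] threads = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            final int level = i;
            threads[level] = new Thread(() -> {
                Thread feeder = (level == 0) ? null : threads[level - 1];
                BlockingQueue<String> inbox = queues.get(level);
                try {
                    while (processed.get() < LINK_LIMIT) {
                        String url = inbox.poll(100, TimeUnit.MILLISECONDS);
                        if (url == null) {
                            // Finish once the feeding thread has died and our queue is drained.
                            if (feeder == null || !feeder.isAlive()) break;
                            continue;
                        }
                        if (!seen.add(url)) continue;  // skip links we already visited
                        processed.incrementAndGet();
                        for (String found : fetchLinks(url)) {
                            if (level + 1 < THREADS) {
                                queues.get(level + 1).add(found);  // feed the next level
                            }
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        for (Thread t : threads) t.start();  // all levels run simultaneously
        for (Thread t : threads) t.join();
    }
}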
In my program, I added new links to an ArrayList and to the next thread's queue (the next thread's spider will then spider those links) and parsed the links retrieved from the webpages' HTML via regex ("<a\\s+(?:[^>]*?\\s+)?href+=[\"']([^\"']*)['\"]" with case ignored); although regex is not the best way to parse HTML, it takes less code and space while still working in most cases.
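To show where the regex step fits, here is a small self-contained example of applying that exact pattern with java.util.regex (LinkExtractor is just an illustrative name, not a class from the repo):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractor {
    // The same pattern quoted above, compiled case-insensitively.
    private static final Pattern HREF = Pattern.compile(
            "<a\\s+(?:[^>]*?\\s+)?href+=[\"']([^\"']*)['\"]",
            Pattern.CASE_INSENSITIVE);

    public static List<String> extract(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1));  // group 1 is the quoted href value
        }
        return links;
    }
}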
This explanation may seem very complicated, so here is a general diagram.
This diagram will suffice for all threads except the first (the only difference being that the first thread spiders just one link and has no queue of links).

I hope this explanation helps you understand how to implement such a tool.  Also, Happy 4th of July!