Monday, December 30, 2013

Dragon NaturallySpeaking, Vimperator, and Emacs

My wife got me Dragon NaturallySpeaking for Christmas. It's been fun learning how to use it, but it's been frustrating learning how to browse the Internet with it. I ended up saying a lot of hotkeys while working with Firefox, such as "press ctrl+k" and "press ctrl+pgdn".

Then I discovered Vocola, which allows you to define your own custom voice commands. With that and Vimperator, which I had already been using, browsing with Dragon is now a breeze.

I just installed Vocola according to the directions found here.

Here is my .vimperatorrc file:

set hintchars='0123456789'
inoremap <C-a> <Ins><C-a><Ins>
map J gT
map K gt
highlight Hint font-size:200%;color:white;background-color:red;padding:2px;

Simply put this file in your home directory. If you are using a Windows operating system, this is something like c:\Users\djhaskin987.

Then, with Vocola installed, use Dragon NaturallySpeaking to open Firefox, and say "edit voice commands", and paste the following:

# Voice commands for Firefox
visit <_anything> = {Esc}":open $1";
close tab = {Esc}d;
open tab <_anything> = {Esc}":tabopen $1";
follow <_anything> = {Esc} f $1;
follow tab <_anything> = {Esc} F $1;
next doc = {Esc} J;
previous doc = {Esc} K;
back = {Esc} H ;

Voilà! Easy browsing by speech. Just say "follow" followed by the hint number, or just "follow" and then say the number that pops up next to the link. To open a webpage, just say "visit" followed by the address. Vimperator will give you a menu of options. A couple of "press tab" and "press enter" commands later and you're in business.

I also plan to use Vocola and Dragon NaturallySpeaking to improve my programming by defining some voice commands for Emacs, sort of like Tavis Rudd, only Vocola is way easier to define voice commands with than Python.

Here is my Vocola file so far for Emacs:


# Voice commands for emacs
<number> := 1..99;
open file = {Ctrl+x}{Ctrl+f};
boom = {Enter};
undo = {Ctrl+Shift+_};
die = {Alt+Backspace};
back = {Backspace};
whack = \;
um = {Alt+b};
curly = "{";
close curly = "}";
paren = "(";
all in = ")";
the <_anything> = $1;
start = {Ctrl+a};
end = {Ctrl+e};
camelize <_anything> = {Alt+x} camelize {Enter} $1 {Enter};
left = {Ctrl+b};
<number> up = {Ctrl+u} $1 {Ctrl+p};
<number> down = {Ctrl+u} $1 {Ctrl+n};
<number> left = {Ctrl+u} $1 {Ctrl+b};
right = {Ctrl+f};
up = {Ctrl+p};
down = {Ctrl+n};
go = {Alt+f};
scroll down = {Ctrl+v};
scroll up = {Alt+v};
cut = {Ctrl+k};
yank = {Ctrl+y};
nay = {Alt+y};  

The function "camelize" is an Emacs Lisp interactive function which takes a phrase like "the grand old duke of york" and turns it into "theGrandOldDukeOfYork", kind of like what is found on the Emacs wiki, but converting from space-separated instead of underscore-separated input, and making the function interactive as well.
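I haven't shared the Emacs Lisp yet, but the transformation at its heart can be sketched in Python (this mirrors the behavior described above; the real version is an interactive Emacs Lisp command):

```python
def camelize(phrase):
    """Turn a space-separated phrase into camelCase: first word
    lowercased, every later word capitalized."""
    words = phrase.split()
    if not words:
        return ""
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])
```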

I also hope to use Emacs templating like Tavis Rudd does in his video to make creating functions and classes by speech easier in Emacs.

EDIT: I'm putting my Vocola files for easy browsing and programming by voice in a git repository. Pull requests welcome!

Thursday, December 19, 2013

Mancakes

What do you want? I dabble.

I have this amazing pancake recipe I made up.

I take this from the original "German Pancake" recipe my wife found, but that was baked in the oven. This is in a pan.

It's called a Mancake rather than a pancake for the following reasons:

  1. It has tons of eggs (protein) in it
  2. It is dead simple -- only requires a 1/2 cup size measuring cup and a teaspoon. (For those of you who subscribe to the metric system, a 1/2 cup is 120ml and a teaspoon is 5ml).
  3. The recipe serves one (1) hungry person (likely, therefore, a man).
Here it is:

1/2 cup milk
1/2 cup (whole wheat pastry) flour
2 eggs
4 teaspoons (olive) oil (20ml, or 1/12 cup)
1 teaspoon baking powder, or 1 teaspoon baking soda and 2 teaspoons (apple cider) vinegar
1 pinch salt

Combine ingredients in a slightly-larger-than-normal bowl using an ordinary fork. Pre-heat your (electric) stove to a "6" (just past medium heat) with your skillet on the burner. Pour 1/2 cup of batter onto the skillet. When there are a bunch of bubbles on the mancake, place your spatula under the right half of the mancake and flip (or, for lefties, the left half). Wait one minute, then take it out of the pan and repeat. Makes 3 mancakes. Serves one. Enjoy.

Monday, December 16, 2013

Moving On

So I could put another post on here about the genetic algorithm, but everything I wanted to say about that particular algorithm has been said.

Rest assured, I have many more ideas about optimization algorithms. But let me tell you -- it's getting frustrating. A lot of my ideas about new algorithms for optimization include this RankedSet idea... and it doesn't exist in standard libraries.

Michał Marczyk made one for Clojure called avl.clj (great job, Michał!), an AVL tree implementation of what I call the RankedSet, but the genetic algorithm described below (and other algorithms bouncing around in my head) needs faster insertions/deletions and virtually no lookup. That is precisely where red-black trees beat AVL trees. I need a red-black tree based implementation.

So I decided to learn how to make one.

I am currently going through Chris Okasaki's "Purely Functional Data Structures". (Even just reading the first chapter, you can tell Rich Hickey idolizes the guy, and for good reason.) For those who don't know, this is a seminal work on how to design your own rock-star data structures. A quote from the first chapter:

Rather than attempting to catalog efficient data structures for every purpose (a hopeless task!), we instead concentrate on a handful of general techniques for designing efficient functional data structures...Once you understand the techniques involved, you can easily adapt existing data structures to your particular needs, or even design new data structures from scratch. (p. 4)

I am even learning OCaml so that I can go through it better (most of the examples in the book are in Standard ML, which is to OCaml as C is to C++, as far as I can tell).

Hopefully at some point into it I'll be able to make a RankedSet, share the implementation, and keep writing about awesome algorithms.

Tuesday, September 3, 2013

The New Genetic Algorithm: What Makes It New

In the last post of my 'New Genetic Algorithm' posts, I talked about the RankedSet, which is the backbone of the algorithm, and what makes the algorithm fast. In this post, I'll talk about what makes the algorithm easy to use.

Keeping it Simple

Nobody likes using a tool with a bunch of dials and switches on it. Would you rather use a drill with five dials on it, each labeled with what looks like a made-up word? You're just thinking, "I want to drill a hole!"


You are not alone. Most people, when faced with using something like a drill, or washing machine, or algorithm, would much rather use an 'on' switch than anything else.

The thing is, most genetic algorithms that I've come across have five dials on them, all of which need to be set properly in order for the algorithm to work:
  1. The fitness function
  2. When to stop
  3. What the kill-off rate will be
  4. What the mutation rate will be
  5. The size of the initial population


I believe that the genetic algorithm is used only in heavy industry and academia precisely because it has five dials and switches. One almost needs an algorithm to solve the problem of how to set these "dials"!

So, a major goal of this new genetic algorithm is to take the algorithm to the masses. It asks the question: "What would a normal person care about?"

A normal person would care about the first two things in the above list:
  1. the fitness function and 
  2. when to stop.


A "fitness function" is a set of instructions given to the algorithm to score the solutions it finds. To stick with our previous post's example, the "solutions" represent which strategy to employ in getting food for my family on the way home from work: Do I get takeout? Do I get fast food? Do I go home and cook something? etc. It's easy to tell an algorithm "I want food quickly, and I'm willing to spend a little to get good food, too." A statement like this is, in essence, the "fitness function". With this fitness function, the genetic algorithm would score 'getting takeout' high, and 'cooking at home' low. Since the fitness function is both easy to understand and essential for the algorithm to work, we'll keep it as an input to the algorithm.
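To make this concrete, here is a toy fitness function in Python for the dinner example. The fields and weights are made up for illustration -- a real fitness function encodes whatever the user actually cares about:

```python
def fitness(solution):
    """Score a dinner strategy: higher is better. Rewards speed and
    taste, mildly penalizes cost (made-up weights)."""
    return (10
            - solution["minutes"] / 10   # faster is better
            - solution["dollars"] / 5    # cheaper is a little better
            + solution["tastiness"])     # tastier is better

takeout = {"minutes": 20, "dollars": 25, "tastiness": 8}
cook_at_home = {"minutes": 60, "dollars": 10, "tastiness": 6}
```

With these weights, fitness(takeout) comes out higher than fitness(cook_at_home), just as described above.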

The question of "when to stop" is also relatively straightforward for the user of the algorithm to answer, but hard for the algorithm itself. Something like "stop when you can't find anything better" or "keep improving your solution until 7am tomorrow" is an easy piece of information for a user to give the algorithm, while it is hard for the algorithm to guess when to stop. See, the genetic algorithm can just keep. on. going. It will try to find better and better solutions until you tell it to stop. This makes sense. There are a million ways, for example, to get food at the end of a work day. However, as a user, I usually know when I want to stop looking for a better solution. Telling the algorithm when to stop is telling it when the solution it finds is "good enough". Therefore, this parameter also makes sense for the user to provide.

It's the other three that can be thorns in a user's side. There is no intuition that can help the user determine what the 'kill-off rate', 'mutation rate', or 'population size' should be. These three parameters need to be set, and set well, but the only way a user can find good values is trial and error. The New Genetic Algorithm is cool not only because of its back-end efficiency, but because of its front-end user friendliness: it knows how to set those three parameters for you.

Setting The Parameters


Kill-Off Rate


Most genetic algorithms will ask you to specify a kill-off rate: a percentage of your population which should be killed off at every step of the algorithm (each "season", if you will). It asks the question: at each iteration, how many of the "weak" of the population should die? This rate also dictates the number of new solutions created at each step, since in general the algorithm should deal with a fixed population size.

So what should our algorithm choose as its kill-off rate? 5%? 10%? 2%? All of these choices seem arbitrary; I haven't found one that makes more sense than the others. Kill off too many, and you won't get results as good as you could have. Kill off too few, and it will take a while to get there.

It should be stated here that kill-off rate is kind of related to something which in other algorithms is called "learning rate". The higher the kill-off rate, the faster the solutions will improve; however, the slower the kill-off rate, the better the results will ultimately be, even if it takes a while to get there. The kill-off rate represents the 'price/quality' tradeoff: more quality = more time = smaller kill-off rate; less quality = faster running = greater kill-off rate. What would be really cool is if the algorithm could choose the kill-off rate based on how much time it was given to run (when to stop). That way the user only need specify when to stop and the algorithm knows how long to run.

In that light, and for reasons that will become clear later in this series, I simply chose to kill off one (1) solution per step. This essentially means that the population size decides the kill-off rate. If I have 100 solutions in my population, the kill-off rate is 1%. If I have only 10, it is 10%. This may seem like a bad thing, but it allows us to choose the kill-off rate based on population size. In other words, by making the kill-off rate dependent on population size, we can control how fast the algorithm learns by regulating only one variable (the size of the population). This makes it much easier to think about how to tell the algorithm how fast or slow to learn. It also makes a single step of the algorithm very fast and gives it more predictable results.

Mutation Rate


A small word should be put here about how creation of new solutions in the population works.

Mutation relies on the fact that every bit string of length n specifies a solution: it flips one of a solution's bits at random. The genetic algorithm doesn't see the solution for what it is; rather, it just sees a bunch of 1's and 0's. This is consistent with nature. Nature doesn't see a bacterium for what it is; it just sees a bunch of DNA. The 1's and 0's are the solution's "DNA". Thus, by flipping one of the solution's bits at random, we mutate it.

There is a small amount of math involved here, but not bad; just enough to keep things interesting.
We want to choose a mutation rate (the percentage of the population getting this random bit flipped) intelligently. The new algorithm is herein designed to choose it to maximize the diversity of the population.

Here in our new algorithm, we want a highly diverse population. If you have many solutions that are roughly the same, what is the benefit of a population at all?

A Side Note About How We Create New Solutions


In the new algorithm, we will use single crossover to create a new solution. There are lots of ways to create new solutions, but as we intimated in the beginning, we like to keep things simple in this new algorithm. Single crossover just means we pick a random number between 0 and n (the size of the solutions -- the number of bits they contain). Call that number p. The first p bits of the new solution come from the first p bits of one solution (the "mom") and the rest of the bits come from the other solution (the "dad").
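A sketch of single crossover in Python, assuming solutions are bit strings (the function name is mine, for illustration):

```python
import random

def crossover(mom, dad, p=None):
    """Single-point crossover: the first p bits come from mom, the
    rest from dad. p is picked at random if not supplied."""
    n = len(mom)
    if p is None:
        p = random.randint(0, n)  # crossover point between 0 and n
    return mom[:p] + dad[p:]
```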

Whom shall we choose to make new solutions? The two best, or perhaps members of the population's elite? We could, but here in our new algorithm, let's keep things simple. We will simply take two random solutions from the population and pair them up to create a new solution.

Why This is Important to Choosing Mutation Rate


Now let's reiterate our problem. We need to choose mutation rate to maximize diversity of the population. In order to ascertain diversity, we need a population sample.

But we just chose two solutions at random to create a new solution. Why don't we use those two as a small "sample" of the population? That way, we only need to retrieve two solutions per iteration, kill one off, create one, and possibly mutate one. Genius! Then the iterations can be done very fast.

How do we determine if the population is "diverse"? Simple: we count how many of the bit positions, from 1 to n, hold the same value in both the mom and the dad. The more positions that match, the more "the same" the population is, and the more it is due for a mutation. Then, if we do decide to mutate, we will simply mutate one of the solutions and re-insert it back into our population.

We will use this formula to determine the probability of whether we should mutate one of the parents:

P(m) = 4*(k/n - 1/2)^2
 
Where P(m) is the probability that we should mutate one of the solutions, n is the number of bit positions for solutions, and k is the number of bit positions where both parents share the same value.

This is a nice, easy-to-compute, bowl-shaped function; it looks a lot like an upside-down version of the information entropy function for two values. That's math-speak for "it worky."

Think about it -- if about half the bits are the same, which you would expect from two random solutions, the probability of mutation is small, since the two solutions already look pretty different from each other. The closer k/n gets to 1 or 0, though, that value goes to 1, or 100%, meaning that one of them really should be mutated: either each solution looks too much like the other, or they look like polar opposites, and we would not expect either in a diverse population.

After we have computed P(m), we just get a random number between 0 and 1. Call it r. If r < P(m), then we mutate one of the parents and re-introduce it to the population. If not, we simply put both parents back un-mutated.
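Putting the last few paragraphs together, the mutation check can be sketched in Python, again assuming bit-string solutions (the function names are mine, not from any library):

```python
import random

def p_mutate(mom, dad):
    """P(m) = 4*(k/n - 1/2)^2, where k is the number of positions
    where the parents agree and n is the bit-string length."""
    n = len(mom)
    k = sum(1 for a, b in zip(mom, dad) if a == b)
    return 4 * (k / n - 0.5) ** 2

def maybe_mutate(parent, p):
    """With probability p, flip one random bit of `parent`."""
    if random.random() < p:
        i = random.randrange(len(parent))
        flipped = "1" if parent[i] == "0" else "0"
        parent = parent[:i] + flipped + parent[i + 1:]
    return parent
```

Two typical parents agree on about half their bits, so most iterations mutate nothing; identical (or perfectly opposite) parents force a mutation.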

There! We have chosen a good method for choosing mutation rate dynamically. It is chosen at each iteration, is easy to compute, and makes sense for the goal.

Population Size


As New Genetic Algorithm designers, it's our job to figure out how best to determine population size. This is the size of the RankedSet discussed in the last post: the number of "indigenous" solutions that we'll keep around. We need a base population if we want to kill some off and create more and still have the genetic algorithm work. So how do we choose population size?

Remember earlier, when I said that by determining population size and keeping the kill-off at one solution, we determine the 'learning rate'? When we have a small population size, say 10, the learning rate/kill-off rate is 1 / 10 = 10%. This means the algorithm will improve fast, but also plateau fast.

Why don't we just wait for it to plateau at some improvement level, and when it does, increase the size of the population? By doing so, we decrease the learning rate, allowing the algorithm to find an even better solution, even if more slowly.

To keep things simple, let's just say this: We'll start off at a population size of 16. When we find that no more "improvement" is to be found within the population, we double population size and continue iterating. We keep doing this until we are supposed to stop, which is something that the user supplies!
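The whole schedule can be sketched as a Python outer loop. The step, should_stop, and plateaued hooks are stand-ins for the pieces described in this series (one kill-one/create-one iteration, the user's stopping condition, and an improvement check):

```python
def run(step, should_stop, plateaued, initial_size=16):
    """Iterate at a fixed population size; double the size (lowering
    the effective kill-off/learning rate) whenever improvement stalls."""
    size = initial_size
    sizes_used = [size]
    while not should_stop():
        step(size)           # one iteration of the genetic algorithm
        if plateaued():      # no improvement to be found at this size?
            size *= 2        # learn more slowly, search more thoroughly
            sizes_used.append(size)
    return sizes_used
```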

Simple, easy, intuitive, and easy to implement. These are the goals of the new genetic algorithm.

Friday, July 5, 2013

The RankedSet: Optimization's Best Friend


This post is the second post about the 'New Genetic Algorithm' that I posted about previously.

What We Need

See, when you're running the genetic algorithm -- that nifty way to solve problems by computer -- you need to store different solutions to the same problem. For example, suppose I need dinner for my family tonight. Do I cook at home, grab some fast food, or buy a TV dinner at the local grocery market? These are different solutions to the same problem: my family needs food.

While the genetic algorithm (any genetic algorithm, including this new one) is running, it needs to maintain a "population" of such solutions. It keeps around, say, 100 solutions, kills off some, and adds new ones, but it always stores the same number of solutions (e.g. 100) at all times.

In my new genetic algorithm, it will be important (for reasons explained in a later post) for the algorithm to be able to figure out the rank of each solution. Rather than knowing how much better one solution is than another, I feel it is important only to tell which solution is better. When we make a buying decision, it does not matter if one option is a little bit better or way better; we usually just take the better option.

This idea is important, and in the algorithm it will become important to be able to find a solution within the "population", and to delete it, based on its rank. I need to be able to say to the data structure, "give me the 45th best solution" and out it pops. I also need to be able to say, "kill off the 32nd best solution". So, in order to store the population so that I can easily find, delete (kill off), or insert (introduce) solutions, the question is: what is the best way to store my population in a computer?

It turns out, there really isn't a data structure in any standard library I've found that can do the job. By the way, for those purist logicians who are craving a formal specification of what 'the job' is, sink your teeth into this:

  • The data structure needs to store information about each solution. If this is the 'bring home the bacon' example, it needs to store stuff like 'store location', 'what to buy', etc.
  • The data structure needs to store information about the rank (e.g., best, second-best, 43rd-best) of each solution.
  • The data structure needs to have efficient retrieval, deletion, and insertion. It needs to be able to delete a solution given its rank, and retrieve a solution given its rank.

Fanfare, Please: Enter the RankedSet

Why We Like It

The RankedSet will solve the problems I have stated above. You can store a solution to a problem in it, you can tell the set how good the solution is, and it will assign ranks for you based on how good each solution is. It provides retrieval and deletion based on rank, and all retrieval, deletion, and insertion runs in logarithmic time. For those non-nerds out there, that means it does what we want it to, and it does it at speeds that take the cake.

Here's how it works: when we insert a solution into the set, we give it two values: how good the solution is, which we will call the index, and the information about the solution itself, which we will call the payload. Since we'll rank the solutions in the set based on the first value, it needs to be a number -- a score, or perhaps in some cases a monetary amount, etc. The payload can be anything we want; in this case, it is the information about the solution itself.

How It Works

But how does it work, you say. That's the juicy part, right?

Computer scientists will recognize that what I am about to describe is, basically, a binary search tree, or BST. Each solution has at most two "children" solutions -- one with a greater index than itself, and one with a smaller index.
With each solution is stored three things:
  1. The solution information itself.
  2. How "good" the solution is.
  3. How many other solutions are "downstream", or, how many "descendants" the solution has (including itself).
It's number 3 in the above list that's the interesting bit. Every other part of the description of how the solutions are stored is just a regular ol' BST, easily found in most any standard library in most any computer language. But no BST I've ever found stores #3, and so none can function as a RankedSet.

Note that the value for #3 can never be empty. If a solution is stored with no "children" solutions, its #3 value is just 1, since it is the number of solutions descended from the node, including itself.

If every solution in the tree stores #3, then it becomes easy for the BST to look up a solution based on rank. Say that I need the 13th best solution in the BST. I ask the "root" node (the very first solution, with no parents and all children) to give me the 13th best. The root asks the greater of its children how many descendants it has. Say that number is 10. Too few. It asks its lesser child how many descendants it has, which might be 20 -- plenty. So, it tells its lesser child to find the 2nd best solution in its family (13, minus the 10 best on the greater side, minus the root node itself, leaves 2). If that sounds hokey to you, then just trust me: it works, and it's fast.

Further explanation would boggle the mind of those of my non-egg-head readers, and is not necessary for those already familiar with binary search trees. Suffice it to say that caching the number of descendants of a solution, including the solution itself, allows us to find and delete solutions based on rank.
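For the curious, here is a minimal sketch of the descendant-count trick in Python. A production RankedSet would keep the tree balanced (red-black, as discussed in an earlier post), but the rank lookup works the same way; the names here are my own invention for illustration:

```python
class Node:
    """One stored solution: its score (the index), its payload, and
    the number of solutions in its subtree, including itself (#3)."""
    def __init__(self, score, payload):
        self.score = score
        self.payload = payload
        self.left = None
        self.right = None
        self.size = 1

def size(node):
    return node.size if node else 0

def insert(node, score, payload):
    """Plain BST insert, plus keeping the subtree sizes up to date."""
    if node is None:
        return Node(score, payload)
    if score < node.score:
        node.left = insert(node.left, score, payload)
    else:
        node.right = insert(node.right, score, payload)
    node.size = 1 + size(node.left) + size(node.right)
    return node

def nth_best(node, n):
    """Return the payload of the n-th best solution (1 = highest score)."""
    greater = size(node.right)       # how many solutions beat this one
    if n <= greater:
        return nth_best(node.right, n)
    if n == greater + 1:
        return node.payload
    return nth_best(node.left, n - greater - 1)
```

Deleting by rank works the same way: walk down using the subtree sizes to find the node, remove it as in any BST, and update the sizes on the way back up.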

So What's It All Mean?


It means that we can now construct an algorithm that does not have to concern itself with how good a solution is, only with whether it is the better one. At the same time, it also means that the algorithm only has to know how good a solution is when it is inserted into the population; from then on it need only talk about weeding out solutions that are weak, like the 75th best solution, and promoting the strong ones, like the 2nd best. It means that it can talk about the rank of those solutions without ever actually retrieving them until it is time to kill the weak and create the strong. This makes our algorithm really fast and very effective.

Up Next

In the next post I will talk about how the algorithm uses the RankedSet, how it chooses how many weak to kill off and spawn, how it chooses how many solutions to mutate, and how easy those choices make it for programmers to use the new genetic algorithm to solve the next big problems.

Monday, May 20, 2013

A New Genetic Algorithm

I have recently been fascinated with the genetic algorithm, a method that allows you to solve pretty much any big problem via computer, like finding the most nutritious, affordable lunches for a week for your kids, or finding the best NFL schedule (funny that these problems are provably equivalent).

The idea is simple: generate a bunch of mediocre solutions, kill off (eliminate) the weak ones, use the stronger ones to sire more solutions, mutate a gene (aspect) of some solution or other just to shake things up, and repeat.

The problem is all of the different ways to do it. Should we mutate 5% of the solutions, or 10%? How many solutions should we kill off at a time? How many should we make? There are a lot of design decisions to make when using this algorithm.

And then there's efficiency to think about. How should the solutions be represented (e.g., what stats should be encoded)? What data structures should be used? What complexity class is acceptable (has to do with how much time the program takes to run)? These questions may sound unimportant from a business perspective, but it's the difference in an app between 'wow, that was zippy' and 'that sure took a long time to load'.

The 'data structure/data representation' question is particularly important because it deals not only with how fast the program runs, but with how maintainable the code base will be in the future. When, as a programmer, I code up a solution to a problem, it is very difficult to change the way the data is represented and stored later.

I have given all these problems some thought, and I have come up with (I believe) a very good solution. The data representation is versatile. The data structure is maintainable, efficient, and convenient. Best of all, I have found a convenient way to get rid of all those parameters (i.e., I have adequately answered all of the 'how/what' questions about the algorithm, listed above).

I will post a series of articles detailing what I did to create a genetic algorithm which need only be given a fitness function and the size of the population of solutions -- nothing else. It is fast, usable across a wide variety of fields, and its operation is intuitive.

My first post will deal with the data structure / data representation problem. My second will deal with removing the how-many-to-kill-off/how-many-to-reproduce/how-many-to-mutate problem. My third post will tie it all together. Join the conversation in the comments below and let me know how I'm doing as I go. I will try to answer any questions.