Tuesday, November 22, 2011

An Idea for a Probabilistic Approximation of the Solution to the Travelling Salesman Problem

In one of my classes here at Brigham Young, our group has been asked to come up with a unique idea for getting a good approximation to an optimal tour in the traveling salesman problem. Here's the idea I came up with. 

I wanted to bring statistics to the table, knowing of techniques like Monte Carlo integration that take an intractable or hard problem and manage its size simply by taking samples of the bigger problem. The question was: how do you take a "sample" of an optimal traveling salesman tour?

The idea is to select three cities at random, and then use Dijkstra's algorithm (or the Bellman-Ford algorithm, in the case of negative edges) to find the shortest path from the first to the second, the second to the third, and the third back to the first again, making sure these paths do not cross. If no such paths exist, we start over and pick three other random cities until we find a cycle connecting them in the way just described. We call such a cycle a subcycle, and it represents a sub-optimal approximation to the optimal tour (it takes the shortest route across a subset of the cities in the graph).

Then, we rate the edges used in this subcycle. Initially, every edge's rating is 0. Each time we find a subcycle, we add k/N to the rating of each edge used in that subcycle, where k is the number of cities in the subcycle and N is the total number of cities in the graph.

We then repeat this rating process hundreds or thousands of times, accumulating ratings for the edges that fall in these random subcycles as described above. In our group project, we have 30 seconds to find a solution for an arbitrarily-sized graph, so we decided to run the rating process for 28 of those seconds so that many samples can be taken.
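Here is a rough sketch of the sampling-and-rating loop in Python. This is my own illustration, not our project code: the function names are mine, and I have left out the check that the three legs do not cross, since that needs geometric information about the cities.

```python
import heapq
import random

def dijkstra_path(adj, src, dst):
    """Shortest path from src to dst; adj maps node -> {neighbor: weight}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def rate_edges(adj, samples=1000, seed=0):
    """Accumulate k/N on every edge of each random 3-city subcycle.

    Omitted for simplicity: the blog's requirement that the three legs
    not cross one another."""
    rng = random.Random(seed)
    nodes = list(adj)
    N = len(nodes)
    rating = {}
    for _ in range(samples):
        a, b, c = rng.sample(nodes, 3)
        legs = [dijkstra_path(adj, a, b),
                dijkstra_path(adj, b, c),
                dijkstra_path(adj, c, a)]
        if any(leg is None for leg in legs):
            continue  # no subcycle through these three cities; resample
        k = len({city for leg in legs for city in leg})  # cities in the subcycle
        edges = {frozenset(e) for leg in legs for e in zip(leg, leg[1:])}
        for e in edges:
            rating[e] = rating.get(e, 0.0) + k / N
    return rating
```

In a timed setting like ours, the `samples` loop would instead run until the 28-second deadline.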

All that is left is to turn this information into a solution. This is easily done by searching for the first complete tour through all the cities, trying the highest-rated edges first. The hope is that this final solution will be a good approximation to the optimal one.
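As a hypothetical sketch of that last step: a greedy walk that always follows the highest-rated edge to an unvisited city. This simplifies the "first tour" search in that it gives up at a dead end instead of backtracking, which a real implementation would do.

```python
def build_tour(adj, rating, start=None):
    """Greedy tour: repeatedly step to the unvisited neighbor whose edge
    carries the highest accumulated rating. `rating` maps frozenset edges
    to scores, as produced by the sampling phase."""
    nodes = list(adj)
    current = start if start is not None else nodes[0]
    tour = [current]
    visited = {current}
    while len(tour) < len(nodes):
        u = tour[-1]
        candidates = [v for v in adj[u] if v not in visited]
        if not candidates:
            return None  # dead end; a full implementation would backtrack here
        v = max(candidates, key=lambda n: rating.get(frozenset((u, n)), 0.0))
        tour.append(v)
        visited.add(v)
    if current not in adj[tour[-1]]:
        return None  # cannot close the cycle back to the start
    return tour
```

On a complete graph the walk always closes, so the only question is how close the rated edges steer it toward the optimal tour.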

We have not yet coded this up, but we are in the process of doing so. I'll post here to let the world know how well it worked out. I'm very excited to see how well this works!

I welcome any comments any of you may have on how well this may or may not work. Please feel free to respond to this post!

Tuesday, November 15, 2011

.vimrc Goodies: Literal Lambda

I recently took to programming in Racket after starting a class from Jay McCarthy (twitter: @jeapostrophe). Racket is a Scheme-based language constructed specifically with industrial use in mind, which means it is based on some pretty solid theory while also possessing, like Perl and PHP, a package management system (PLaneT), a JIT compiler, and other goodies that make it a great language to use in the real world.

This language, along with Perl (given that 'use lambda' is invoked), allows the programmer to insert an actual Unicode lambda instead of the keyword 'lambda'. DrRacket, Racket's native IDE, allows you to insert this character by pressing 'CTRL+\'. After looking at this page and others, I found I could do this in vim, too.

I put the following lines in my .vimrc file (the same entry worked in my _vimrc file for the Windows version as well):

if has("multi_byte")
  if &termencoding == ""
    let &termencoding = &encoding
  endif
  set encoding=utf-8
  setglobal fileencoding=utf-8
  "setglobal bomb
  set fileencodings=ucs-bom,utf-8,latin1
endif

map! ^\ ^Qu03bb

I inserted the special characters '^\' (CTRL-\) and '^Q' (CTRL-Q) the normal vim way: in insert mode, type CTRL-V, then whatever control character you need. For example, I inserted '^\' by typing CTRL-V, CTRL-\. CTRL-Q was trickier: it is usually a specially bound command, so in order to insert it I had to use its decimal character code: CTRL-V, 017 were the keys I typed to insert that character.

Now whenever I'm editing a file, I type CTRL-\ and out pops a bona fide lambda!

Friday, July 29, 2011

Tag Cloud of Programming Languages Based on Popularity

The following tag cloud was made by creating a text file with a given programming language's name repeated over and over, with one occurrence of the name appearing in the text file for each 100,000 results Google yielded for the phrase '"<language name>" programming language'. It has some interesting results. Please feel free to comment on its accuracy or inaccuracy, as I am very interested to know exactly how accurate the Google searches are. Enjoy!

Sunday, April 3, 2011

Best Combo for Facebook, Twitter, and Chat

As cool as TweetDeck is, have you ever looked at its privacy policy? Yuck. Many other similar applications have similar disclaimers in their privacy policies, or ads all over the place. For Windows users, there is a solution.

For chat: Pidgin. Being open source, there are no ads and you're not selling your soul to use it. It works well, and with virtually any protocol (Facebook, Google, AIM, ...).

For Twitter/Facebook/Digg/RSS/etc. feeds: MahTweets. It is open source, using Microsoft's new open source license. Its interface is clean, it is customizable, and it is a CLIENT: it is not something you need to open your browser to use.

I've been looking for a pair of applications like this for a while. I hope others find this useful.

Fuzzy Logic III: Proof Construction

This post is for those burning with questions on how to actually write a fuzzy logic proof. Even though the law of contradiction doesn't generally hold for fuzzy logic, the normal rules of natural deduction do, with some extended functionality to deal with uncertainty.

I will write about some of the basic rules of natural deduction here. From there, it should be readily seen how the rest of the rules follow.

Before I move forward, it should be noted that last time we worked with two different systems of logic: minimization (a & b = min(a,b)) and additive (a & b = max(0, a + b - 1)), with negation in both being the operation (1-a) and with DeMorgan's Laws adopted as an axiom.

Since what follows applies to all fuzzy logics, I shall write d(x,y) for the function which evaluates the certainty of a disjunction from the certainties of its disjuncts. For minimization logic, d(x,y) = max(x,y) (see the previous post for more details). Likewise, I write c(x,y) for the function which evaluates the certainty of a conjunction of, say, x and y from the certainties of x and y themselves.

Writing Convention

In this document, I will format my proofs as follows:
#   assertion     certainty     justification

So therefore, with fuzzy logic proofs we deal with three columns.

As an example:

1. P     50%     Premise
2. Q     80%     Premise
3. R     30%     Premise

Conjunction  (Combination)

In Fuzzy Logic, Conjunction (P, Q |= P & Q) works much the same as in conventional logic, with the extension that the certainty attached to the conjunction is simply c(x,y) for the logic with which we are dealing, where x and y are the certainties of the conjuncts.

Example:

1. P     50%     Premise
2. Q     70%     Premise
3. P&Q   50%     1,2, Conjunction (using minimization logic)
3. P&Q   20%     1,2, Conjunction (using additive logic)
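As a quick check (my own sketch, not part of the proof format above), the two conjunction functions applied to the premises P = 50% and Q = 70%:

```python
def c_min(x, y):
    """Conjunction certainty under minimization logic."""
    return min(x, y)

def c_add(x, y):
    """Conjunction certainty under additive logic."""
    return max(0.0, x + y - 1.0)

print(c_min(0.50, 0.70))  # 0.5, matching the minimization line
print(c_add(0.50, 0.70))  # ~0.2, matching the additive line
```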

Disjunction (Addition)

The same goes for disjunction as for conjunction: given p and q, p | q can be attached with the certainty given by d(p,q). P|Q above would therefore take on the values 70% (minimization) and 100% (additive), respectively.
When attaching some proposition p whose certainty we are not given, we can still bound the result: since d(x,y) never decreases as either argument grows, we write 'at least 30%', for example, if the proposition to which one attaches p has a certainty of 30%.

Modus Ponens

As I said in my last post, we are not guaranteed the law of contradiction. In conventional logic, it is sufficient to use Contradiction along with Conditional Exchange (P -> Q is equivalent to ~P | Q) and Conjunction to derive Modus Ponens. This is not the case in fuzzy logic; however, we can use the definitions of d(x,y) and c(x,y) to make some significant inferences. In this example I will use additive logic, but it should easily be seen how to generalise this approach to any fuzzy logic.

We have:

1. P         43%     Premise
2. P -> Q    83%     Premise

And so we want to derive Q. By Conditional Exchange, P -> Q has the same certainty as ~P | Q, so we plug P's certainty into the formula we use for computing the certainty of disjunction:

d((1-p),q) = .83
d((1-.43),q) = .83

We now substitute the particulars of d(x,y). In our case, this is min(1,x+y):

min(1,1-.43+q) = .83
min(1,.57+q) = .83

It should be clear now that since .83 is less than 1, we know that min(1,.57+q) = .57+q. And so

.57+q = .83
 q    = .26 .
We can now attach Q to our proof above with certainty 26% under additive logic. 

What if P -> Q's value is 1 in this case? We find Q's certainty to be the least certainty that would give 1, adding 'greater than or equal to' next to it:

1. P       83%    Premise
2. P -> Q  100%   Premise
3. Q       >=83%  min(1,.17+q) = 1 :. smallest q = .83 .

That's how it works: using the formula for d(x,y) together with what we know about the certainty of the conditional and the antecedent, we can derive the certainty of the consequent.
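The procedure just described can be mechanized. This is my own sketch for additive logic, where d(x,y) = min(1, x+y); the function name is mine, not standard terminology:

```python
def fuzzy_modus_ponens(p, cond):
    """Given certainty p of P and certainty cond of P -> Q (i.e. of ~P | Q),
    solve min(1, (1 - p) + q) = cond for the certainty q of Q.
    A negative result would signal inconsistent premises."""
    if cond < 1.0:
        return cond - (1.0 - p)  # the min did not hit the cap, so solve directly
    return p                     # smallest q with (1 - p) + q >= 1; read as "q >= p"

print(fuzzy_modus_ponens(0.43, 0.83))  # ~0.26, as in the derivation above
print(fuzzy_modus_ponens(0.83, 1.00))  # 0.83, the ">= 83%" case
```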

Conditional Proof

How do we create conditionals? Easy. Simply assume p to some certainty, derive q to some certainty, and then infer p -> q with a certainty value of d(1-p,q).

For example, under minimisation logic:

 1. P       37%     Assumption
17. Q       75%
19. P -> Q  75%     1-17, Conditional Proof: d(1-.37, .75) = max(.63, .75) = .75


Although the law of contradiction does not apply in general, it does apply at the "end points". In other words, if one can derive p with 100% certainty and also ~p with 100% certainty, their conjunction will (by definition) have 100% certainty; one can then assign the assumption that led there 0% certainty (100% certainty of its negation) and use this together with conditional proof to prove something via reductio ad absurdum.


This work was not intended as a comprehensive treatise on fuzzy logic inference rules, but hopefully it has given a good feel for how they work. Enjoy!

Monday, March 21, 2011

Fuzzy Logic Proofs II: Systems of Logic

In this post I will discuss how fuzzy logic works as opposed to conventional logic.  This will lay groundwork for our discussion of fuzzy logic proofs later. Remember, the motivation for looking at such kinds of proofs is that it provides us with a means of dealing with uncertainty. Perhaps there are other motivations, but I will not mention them here.

I assume basic knowledge of first order logic (and, or, not, only if, ...) and pretty much nothing else. Don't worry, I won't get all academic on you. We'll save that for papers and articles.

Now there's a lot of theory and academic mumbo jumbo surrounding fuzzy logic, but practically speaking anyone can do it. A "fuzzy logic system" basically describes how to deal with "a & b (a AND b)", "a | b (a OR b)", and "~a (NOT a)" when we are, say, only 80% certain a is true and 50% certain b is true. There are two common ways of doing so that I will discuss here.

Defining the last operation mentioned above is the easiest. No matter what kind of logic we use, "~a", the logical complement of a, is always going to be "1-a". This makes sense if we look at conventional logic, taking 1 to be "True" and 0 to be "False": ~1 = 1-1 = 0, and ~0 = 1-0 = 1.

Fuzzy Logic: Conservative Flavor

The first is the most common. For "a & b", where a=.5 (50% certain) and b=.8 (80% certain), it says this: "Well, we know that a is 50% true, and b is more than 50% true, so we are at least 50% certain that the whole statement 'a & b' is true." This conservative attitude contributes to its popularity, and the fact that we take the smallest certainty value to be the value of the whole statement gives it a name: minimization fuzzy logic. So the certainty of the expression "a & b" is the smaller of the certainties of a and b. You may note that this agrees with conventional logic: min(1,1) = 1, while min(a,b) = 0 whenever either value is 0.

As for "a | b", we use DeMorgan's Law to define the meaning of disjunction. Since "a | b" is equivalent to "~(~a & ~b)", we can use just subtraction and "min" to build our OR operation: 1 - min(1-a, 1-b), which works out to the greater of a and b. This matches our conservative intuition: "We know a is 50% true and b is 80% true, so 'a | b' must be at least 80% true." This is again confirmed by our conventional "absolute truth" model: max(0,0) = 0, while max(a,b) = 1 whenever either value is 1.
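As a quick sketch in Python (my own illustration, with my own function names), the minimization operators, with OR built from NOT and AND via DeMorgan, and a check that they collapse to conventional logic on 0/1 values:

```python
def f_not(a):
    """Negation: the same (1 - a) in every fuzzy logic discussed here."""
    return 1.0 - a

def f_and(a, b):
    """Minimization-logic conjunction."""
    return min(a, b)

def f_or(a, b):
    """Disjunction via DeMorgan: ~(~a & ~b), which equals max(a, b)."""
    return f_not(f_and(f_not(a), f_not(b)))

# Collapses to the classical truth tables on 0/1 values:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert f_and(a, b) == (a and b)
        assert f_or(a, b) == (a or b)
```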

Furthermore, with this (and many other) systems of fuzzy logic, the conventional laws of association, commutation, and transitivity still apply, as well as our faithful friend, DeMorgan's Laws.

Although not generally true of fuzzy logic (though totally true in conventional logic), minimization logic follows the law of idempotence; in other words, a & a = a and a | a = a, since min(a,a) = max(a,a) = a.

Fuzzy Logic: Liberal Flavor

No pun intended, by the way, in the naming of these flavors. They're both important for our purposes, and the usefulness and limits of each will be demonstrated.

All the cool stuff except idempotence (association, commutation, transitivity, etc.) applies to this logic as well, including the fact that this kind of logic agrees with conventional logic (it matches it when using only the values 0 and 1).

The next flavor of logic is actually defined not by its conjunction operation (AND) but by its disjunction (OR). For example, let's say that we know x is 40% true and y is 30% true. To figure out how to assign a certainty value to the phrase 'x | y', we simply add their certainties together; therefore, 'x | y' has a certainty value of 70%.

Wait! what if y was 70% certain? then 'x | y' would be 110% certain!?!? 

Nah. We'll just say that if the certainties of x and y sum to more than 1 (or just barely make it to 1), the couple has "made the cut," and their combined certainties are more than enough to make us certain that 'x | y' is absolutely true. Therefore, if the certainties of x and y are 40% and 80%, 'x | y' is still 1. That's why this logic system is referred to as bounded addition. We see that this flavor of logic is quite liberal: it takes the certainty of BOTH values into the value of the whole.

Using DeMorgan's Laws, we use NOT and OR to find AND. Conjunction (AND) here has the same kind of "make the cut" attitude: "If x and y's certainties sum to more than one, it's significant. Otherwise, we really can't say much about the certainty of 'x & y' as a statement." In other words: add the certainties of x and y, subtract 1, and you have the value of 'x & y'. Again the question arises: "What if that comes out negative?" The answer is in the same spirit as last time: if x+y-1 is less than 0, we interpret that as being so uncertain of x and y that we have more than enough reason simply to assign 'x & y' a certainty of 0.

This is also called Łukasiewicz (Loo-KAH-she-vich) logic, after the Polish philosopher who first invented fuzzy logic. Yes, ladies and gentlemen: additive logic was the first fuzzy logic out there, invented in the early 20th century.

This logic has an advantage over the others: unlike most fuzzy logics, Łukasiewicz logic obeys the laws of contradiction and excluded middle:
"a & ~a"=0%, "a | ~a"=100%.
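A small sketch (my own, with my own function names) of the bounded-addition operators, verifying that contradiction and excluded middle hold at every certainty level, not just at 0 and 1:

```python
def l_not(a):
    return 1.0 - a

def l_or(a, b):
    """Bounded addition: certainties sum, capped at 1."""
    return min(1.0, a + b)

def l_and(a, b):
    """Via DeMorgan: ~(~a | ~b) = max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

# Contradiction and excluded middle hold for every certainty a:
for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert l_and(a, l_not(a)) == 0.0   # a & ~a is always 0%
    assert l_or(a, l_not(a)) == 1.0    # a | ~a is always 100%

# By contrast, minimization logic fails contradiction:
# min(0.5, 1 - 0.5) = 0.5, not 0.
```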

What? MOST fuzzy logic doesn't obey contradiction? How can people use minimization logic without it? How can MODUS PONENS even work without it?

Oh, but it can. I'll show you how next time. (I said in my first post that I'd save that stuff for my third post, not my second, but I think I'll just jump to the good stuff.)

Friday, March 18, 2011

Fuzzy Logical Proofs

Many decent EE Professors will tell you that Fuzzy Logic is used quite a bit in fuzzy control. For those of you who don't know what that is, congratulations; you're a member of normal society. We will not dwell on that here. Instead, we will focus these Fuzzy Logic energies into logical proofs.

For those logicians out there familiar with normal two-column proofs, such as Fitch-style natural deduction proofs or similar, you will find this somewhat familiar. The difference between fuzzy proofs and conventional proofs is that fuzzy proofs provide us with a mechanism for dealing with uncertainty.

In conventional First Order Logic, we may assign any given statement a truth value from the set of two truth values, True and False. In Fuzzy Logic, there is a "spectrum of truth" wherein a statement can be "20% True" and "80% False". Another (more intuitive) interpretation of such a value, in terms of using it to form a proof, is that we are 20% certain that the statement is true. Thus fuzzy proofs allow us to use premises with varying degrees of certainty, and in a chosen interpretation allow us to prove results, perhaps even with higher degrees of certainty than our premises. (Side note: those of you familiar with Zadeh's work may cringe at this, but I will explain what I mean when we get there.)

This post starts a series of blog posts which I will publish on the subject of Fuzzy Proofs as I understand them. Please feel free to make comments and give criticisms for my arguments.

In the first post we will define what "fuzzy logic" herein means, and explore the possibilities of the different logical systems at our disposal, and why we may or may not choose them for our proof or to solve a specific problem. Feedback will be requested here. This is a domain which I am trying to understand better even now.

In the second post we will define what an "interpretation" is, or in other words, we will define how to translate our mathy proof back into English so that it may be used to make decisions, as all useful proofs must ultimately provide use for this end.

In the last post, we will walk through a sample fuzzy logic proof. I will comment on its usefulness, its form, and its weaknesses.

From what I can understand, this field has only been explored by few scholars, most deeming it of little use. But I find it relevant here to examine G. H. Hardy, who said "I have never done anything useful," thinking number theory to be an unimportant part of mathematics. Without him, RSA would not be where it is today. (re-quoted from Algorithms, by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, p. 41.)