I think this blog has some good momentum going. Up to this point I've discussed some of the basic terminology of graphs, introduced you to graphs with interesting-sounding names, and in my last post I went into a little more detail about one particular graph, the so-called caterpillar graph or caterpillar tree. Rather than delve right into a proof of the claims made at the end of that post, I think it will be more thorough, and more interesting, to first address some of the ideas leading up to that result. I will do so in the order in which the claims were stated. Remember that the first claim was the following:
The square of a caterpillar graph contains a Hamiltonian path.
A natural question then is, "What exactly is the square of a graph?" You can think of the square of a graph as the original graph plus something extra. The extra parts are edges that you add between two nodes of the graph if they satisfy a certain relation. In full, the square has the same set of vertices as the original graph, and the relation defining edge connection is that two nodes x and y are neighbors in the square if there is a path of length at most 2 between them in the original graph. That's a bit of a mouthful. Below is an illustration of this whole squaring process. To the left is our original graph, which I am calling G. To the right is its square, called G^2. Notice that if two nodes are already connected in the original graph then no extra edge needs to be added between them when computing the square. Also notice that the square is still a simple graph: we do not add a loop from a vertex to itself.
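To make the definition concrete, here is a minimal sketch in Python. The dictionary-of-sets representation and the example path are just my own choices for illustration; the idea is simply to connect every vertex to anything reachable by a path of length at most 2 in the original graph.

```python
# A minimal sketch of graph squaring, assuming the graph is stored as a
# dictionary mapping each vertex to the set of its neighbors.
def square(graph):
    g2 = {v: set(neighbors) for v, neighbors in graph.items()}
    for v, neighbors in graph.items():
        for u in neighbors:
            # every neighbor of a neighbor is within distance 2 of v
            for w in graph[u]:
                if w != v:          # skip loops; the square stays simple
                    g2[v].add(w)
    return g2

# Example: the path a - b - c - d; its square gains the edges a-c and b-d.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(square(path))
```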
Seems simple enough. In computational terms, though, how difficult is it to compute the square of a graph? From a computational perspective it would be interesting to know whether the square of a graph can be computed efficiently (informally meaning in a "reasonable" amount of time). This is a question that I will explore in the remainder of this post. It is one that I am currently trying to find more information on. There are plenty of papers online that address the question of calculating roots of graphs, but I don't see any that, at least in the title, state that they are looking into algorithms for computing powers of a graph. I don't know what the computational complexity of computing the square of a graph is, but in the following paragraphs I will describe a simple method for determining a graph's square and talk about its complexity.
Method #1:
One very simple way of determining the square of a graph is by performing some calculations on its underlying adjacency matrix. The adjacency matrix of a graph is a matrix representation of the connections between nodes. The rows and columns correspond to the vertices of the graph: row i corresponds to vertex i and column j to vertex j. If an edge exists between vertices i and j then the i-jth entry of the matrix is a 1; otherwise the entry is a zero. This is the typical definition of an adjacency matrix for a simple, undirected, and unweighted graph. Below is an example of such a graph and its corresponding adjacency matrix. In this example I have labeled the vertices a through d, and the columns and rows of the adjacency matrix correspond to those vertices in the same order from left to right and top to bottom. Notice that since the graph is simple the entries along the main diagonal are all zeros. This just reflects the fact that we are excluding loops.
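If you prefer to see this in code, here is a small Python illustration of building an adjacency matrix from an edge list. The vertices a through d match the labeling above, but the edges below are just an example of mine and are not necessarily the same as the graph in the figure.

```python
# Build an adjacency matrix for a simple, undirected, unweighted graph.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]   # example edges only

index = {v: i for i, v in enumerate(vertices)}
n = len(vertices)
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[index[u]][index[v]] = 1
    A[index[v]][index[u]] = 1   # undirected: the matrix is symmetric

for row in A:
    print(row)
```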
There is a basic result in graph theory which says that if you square the adjacency matrix of a graph, the i-jth entry of the result counts the number of walks of length 2 between vertices i and j. I will not go into the proof of that but will simply state it as fact in this post. It follows from that result that if you 1. add the entries of the original adjacency matrix to the non-diagonal entries of its square and 2. convert the nonzero entries to the number one, you will obtain the adjacency matrix of the square of the graph. This method then gives us a simple way of determining the square of a graph. See if you can follow steps 1 and 2 on the first graph of the post to obtain the adjacency matrix of its square on the right.
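Here is the same recipe written out as a short Python sketch using NumPy, again on a path with vertices a through d (my own example, not the post's figure):

```python
import numpy as np

# Adjacency matrix of the path a - b - c - d.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

walks2 = A @ A                    # (i, j) entry counts walks of length 2
B = A + walks2                    # step 1: add the original adjacencies
np.fill_diagonal(B, 0)            # keep the diagonal zero (no loops)
A_squared = (B > 0).astype(int)   # step 2: turn nonzero entries into 1s
print(A_squared)
```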
Now for the complexity part. In this method, the two key steps are 1. matrix multiplication and 2. matrix addition. Of the two, the one that takes more time computationally is the multiplication, which runs in roughly O(n^3) time (the O here stands for big-O notation). I say roughly because the amount of time really depends on which matrix multiplication algorithm you select for squaring the adjacency matrix. The traditional method of multiplying rows by columns is O(n^3). There is a better algorithm for multiplying matrices, though, that is O(n^log2(7)), roughly O(n^2.81). It is known as Strassen's algorithm and was notable for being the first to break the O(n^3) barrier. The addition of two square matrices just involves adding the corresponding matrix entries and is O(n^2). So using some basic results involving big-O notation we can say that this simple method of squaring a graph is O(n^3).
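To see where that cubic bound comes from, here is the textbook row-by-column multiplication written out in Python. It is just a sketch, not something you would use in place of an optimized library routine:

```python
# Textbook row-by-column multiplication of two n x n matrices.
def multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # n choices of row
        for j in range(n):      # n choices of column
            for k in range(n):  # n multiply-adds per entry
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Each of the n^2 entries of the product takes n multiply-adds to fill in, and that is where the n^3 comes from.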
Some of you may wonder whether a better method exists for the problem at hand. In terms of complexity, you may wonder if we can improve on this upper bound in any way. One possibility that I will consider in an upcoming post is to look at the incidence matrix representation of the graph rather than the adjacency matrices involved in the above calculations. What exactly incidence matrices are, as well as an elaboration on their uses, will be discussed in that post. Try to see if you can find or think of faster ways of computing the square of a graph. As I hinted above, the answer may lie in the underlying representation of the graph. As a side note, it turns out there are lots of ways of storing the information of a graph besides the simple matrices discussed above. In terms of their setup they may seem more complex, but they are well worth the effort in other ways.
So until the next post....keep on graphing! :-)