The Revolution (The Preview)

This week I continued working on diagram embeddings and, unfortunately, I have discovered that Diagram did not actually work properly. I have written a status report e-mail to Tom, in which I briefly outline the progress. This e-mail (with some omissions) will serve as this week’s blog post, because writing a proper blog post would take me at least three hours, and I would rather code right now, given the limited timeframe.

Unfortunately, I’ve got some, well, ambiguous news.

Remember I told you about hash-randomisation failures in computing
diagram embeddings? Well, it turned out that the diagram embeddings
were quite OK, and the problem went as far back as the Diagram class.
Essentially, I did a really bad job implementing it at the beginning
of the summer: I wanted it to directly store all possible morphism
compositions. However, in that implementation, I didn’t really store
all compositions, but just a part of them; which part I stored
depended on the order in which the morphisms were supplied
(severe facepalm :-( )

I tried thinking of some good fixes, but, as you can easily imagine,
the whole idea of storing all composites has suffered an epic
disintegration in the face of diagrams with cycles. I am really
_really_ astonished at how this has managed to slip by me for such a
long time! :-(

I spent the first third of Friday trying to save the existing design
somehow, by carefully processing all possible edge cases, but this
very quickly turned abominable and, of course, it didn’t work. So I
spent the greater part of Friday thinking out a new Diagram, and all
of yesterday (all of Saturday, that is) implementing this new
concept. Basically, the
new Diagram only stores the relevant generator morphisms, but it is
still able to correctly determine and enumerate the morphisms that
belong to it. It is also capable of determining whether it is finite
or not (i.e., whether the underlying directed multigraph is acyclic).
When asked for all morphisms, the new Diagram returns a generator
which acts correctly in both the finite and infinite cases. In the
finite case it produces all the morphisms of the Diagram. This is
clearly impossible in the infinite case, but Diagram is sufficiently
clever in this case to produce the morphisms in a BFS-like manner:
intuitively, it will first yield all length-one morphisms, then all
morphisms of length two, etc.
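To illustrate the idea (with made-up names and representation, not the actual SymPy code), here is a minimal sketch of such a BFS-like generator, where a composite morphism is a tuple of generator indices:

```python
def enumerate_morphisms(generators):
    """Yield the morphisms of a diagram in a BFS-like manner.

    ``generators`` is a list of (domain, codomain) pairs; a composite
    is represented as a tuple of generator indices.  For an acyclic
    diagram the generator terminates; in the presence of cycles it
    yields forever, shortest composites first.
    """
    # Length-1 morphisms are the generators themselves.
    current = [((i,), dom, cod) for i, (dom, cod) in enumerate(generators)]
    while current:
        longer = []
        for path, dom, cod in current:
            yield path, dom, cod
            # Extend every length-n morphism by each composable generator.
            for j, (dom2, cod2) in enumerate(generators):
                if dom2 == cod:
                    longer.append((path + (j,), dom, cod2))
        current = longer
```

For a cyclic diagram (say f:A→B and g:B→A) a caller simply stops pulling from the generator after enough morphisms have been produced.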

I have made some changes to the interface of the Diagram to better
reflect the new internals. Nevertheless, the behaviour in the finite
case is the same as that of the old Diagram (modulo some property
names and minor changes, of course).

One bit of good news that deserves standing out in a separate
paragraph is that I only had to change _one_ line of code to get it
to work with the new Diagram. (Well, I did drop three other lines,
because they were redundant.) So this
radical swerve with the Diagram has left the larger part of my GSoC
work unaffected.

Now, I have started cherry-picking the diagram embeddings code, and I
have arrived at the conclusion that Diagram has to be further extended.
(“Extending” means adding something new, not rewriting it again.)
Namely, it is insufficient to know whether the whole Diagram is finite
or not; I really need to know whether a certain hom-set is finite or
not. It’s not that hard to implement, and I’ve got a cool book on
graphs; however, it’s going to require some extra time.
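The hom-set finiteness check can be sketched as follows (illustrative names and representation, not the eventual implementation): Hom(A, B) is infinite exactly when some vertex lying on a directed cycle is reachable from A and can itself reach B.

```python
def homset_is_finite(edges, A, B):
    """Decide whether Hom(A, B) of a diagram contains finitely many
    composites.  ``edges`` is an iterable of (u, v) pairs of the
    underlying directed multigraph.
    """
    succ = {}
    for u, v in edges:
        succ.setdefault(u, set()).add(v)

    def reachable(start):
        # Iterative DFS collecting every vertex reachable from start.
        seen, stack = set(), [start]
        while stack:
            for nxt in succ.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # Hom(A, B) is infinite iff a vertex v lying on a directed cycle
    # (i.e., v is reachable from itself) sits on some walk from A to B.
    for v in reachable(A) | {A}:
        if v in reachable(v) and B in reachable(v) | {v}:
            return False
    return True
```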

Here comes the most important part of my message: I’m working at the
fullest possible cruising speed (not sprinting yet; that I’m saving
for the last 100m). Obviously, I won’t have everything done by
tomorrow (Monday); however, I strongly believe that I only need
another couple of days to finish the bulk of inferencing. Provided
that Monday is what is referred to as the _soft_ pencils-down date, I
hope that I’m still OK with the GSoC timeframe. Further, I think I
have already mentioned a couple of times that I’m going to have
another couple of free weeks after GSoC, during which I will easily
be able to finalise whatever is unfinished. Do note, however, that I
definitely expect to have inferencing done _within_ the GSoC
timeframe.

Conclusion: despite the rather radical direction things have taken in
the last two days, I’m _still_ more or less fine with the timing.

At the moment, you will not be able to see the code I’m working on on
GitHub. The reason is that I’m juggling branches rather ninja-ily
right now, so I don’t really have the most relevant one to push
online, and they are all relatively short-lived. I do expect to get
back to working sequentially today, and once I’ve got there, I’ll push
to ct3-commutativity to reflect the updates.

I’m documenting everything I do in as minute detail as possible. I
think the Diagram class and the embeddings functionality have more
comments than actual code. I expect this to make reviewing and later
maintenance considerably more agreeable. Further, my commits are
almost all rather short, with acceptably long commit messages. There
is one commit that breaks the rule, however: the commit which adds the
new Diagram. It is one relatively large chunk of code, which replaces
the old Diagram with the new one and shows that the old tests still
pass modulo minor changes. I have nevertheless reformatted the
history a bit to make this commit easier to review and, of course, the
code itself is just literally stuffed with comments. All other
commits are much more like my usual ones.

Whenever I’m done with the core parts of inferencing, I will write a proper blogpost.

The Embedding

This week I have started on some actual code for deciding the commutativity of diagrams and implications. The first half of the week went into splitting Diagram into Diagram and Implication, as outlined in the previous post. Nothing really unexpected happened during that part, so there isn’t much to say about it, save that the code has become clearer and better organised. Furthermore, I have gained a better understanding of some corner cases, as well as implemented more robust handling for them.

The second half of the week was considerably more exciting and thought-intensive: it was related to finding diagram embeddings. As should be clear from the last post, this functionality lies at the foundation of deciding the commutativity of diagrams and implications. In what follows, I will refer to the diagram which we need to embed as the pattern, and to the diagram into which we need to embed as the model. This seems to be an almost universally accepted terminology and comes from the fact that finding subgraph isomorphisms often lies at the base of various pattern matching implementations.

I started by selecting and analysing the excellent paper by J. R. Ullmann, [Ullm1976], which describes a very clear way of enumerating all possible graph embeddings. This solution, however, was not exactly what I needed. First of all, the algorithm described in detail in [Ullm1976] is actually meant for undirected graphs, whereas one can clearly see arrows in diagrams. Furthermore (a thought that occurred to me quite late), diagrams are actually multigraphs, in the sense that there can be more than one morphism between two objects. Yet further, a diagram embedding must preserve morphism properties, in the sense that the embedding must map a morphism in the pattern to a morphism in the model which has exactly the same properties as the morphism in the pattern.

I attempted to find out whether someone had addressed the directed multigraph embedding problem before; however, I haven’t managed to find any references on the Internet, so I started thinking about adapting Ullmann’s solution to my case. The first thing I figured out was that I could reduce the directed multigraph embedding problem to a directed graph embedding problem. Indeed, take a diagram and flatten all multiple morphisms going in the same direction between the same two objects down to one directed edge between these two objects. Then construct directed graph embeddings and, for each such embedding, for each directed edge e of the flattened pattern, construct injective, property-preserving mappings from the set of morphisms of the pattern which were flattened to e, into the set of morphisms associated with the edge in the flattened model to which e is mapped by the subgraph isomorphism. (These mappings are actually property-preserving embeddings in their own right, but I won’t call them so, since I’m good and I understand that the blog post has just become a bit unclear, so to say :-D )

Let’s see an example. Consider the diagram comprising two different morphisms: f:A\rightarrow B and g:A\rightarrow B, where f has the property golden; this diagram is going to be our pattern. Now, consider the model: a diagram comprising three morphisms \alpha:C\rightarrow D, \beta:C\rightarrow D, and \gamma:C\rightarrow D, in which \beta has the property golden. Quite obviously, all of our property-preserving embeddings should map f to \beta, while g can be mapped to either \alpha or \gamma. Also note that the flattened pattern in this case is the graph consisting of a single edge, (A,B), while the flattened model is another one-edge graph, (C,D). More complex diagrams are treated in a similar fashion: flatten the pattern and the model to directed graphs, find directed graph embeddings, and then find the property-preserving morphism mappings.
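The per-edge mapping step from this example can be sketched by brute force (the function name and the dict-of-properties representation are mine, not the actual code):

```python
from itertools import permutations

def property_preserving_mappings(pattern_ms, model_ms):
    """Enumerate injective mappings of the pattern morphisms flattened
    to an edge e onto the model morphisms of the image of e, requiring
    that the image of each morphism carries exactly the same
    properties.  Both arguments map a morphism name to the frozenset
    of its properties.
    """
    names = list(pattern_ms)
    for images in permutations(model_ms, len(names)):
        if all(pattern_ms[n] == model_ms[m] for n, m in zip(names, images)):
            yield dict(zip(names, images))
```

For the pattern and model above, this produces exactly the two mappings {f: beta, g: alpha} and {f: beta, g: gamma}.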

There was another slight surprise underway, however. Ullmann does describe some of the modifications which make the original algorithm capable of constructing directed graph embeddings; however, he has apparently forgotten to describe one of them. I will give some definitions before going into more detail. Ullmann uses A to refer to the adjacency matrix of the pattern, B to refer to the adjacency matrix of the model, and M to refer to the matrix representing a mapping of the vertices of the pattern into the vertices of the model; M_{ij} = 1 means that vertex i in the pattern is mapped to vertex j in the model.

Now, for given A, B, and M, compute C = M (M B)^T. Condition (1) in [Ullm1976] states that if a_{ij} = 1 \Rightarrow c_{ij} = 1 for all vertices i and j of the pattern, then M represents an embedding. (As usual, a_{ij} are the elements of A and c_{ij} are the elements of C.) When I tried to actually use this criterion for a directed graph, I found that, apparently, C^T should be used instead of C. The formal explanation follows. By abuse of terminology, I will use “pattern” and “model” to refer to the flattened pattern and flattened model as well.

Let D = M B. d_{ij} = 1 means that \exists! k . (m_{ik} = 1 \mbox{ and } b_{kj} = 1), where k is a vertex of the model. In other words, this means that the vertex i of the pattern is mapped to a unique vertex k of the model, such that in the model there exists the (directed) edge (k, j). Obviously, if d^T_{ij} is an element of D^T, the role of the indices is reversed, that is: the vertex j of the pattern is mapped to a unique vertex k of the model, such that in the model there exists the (directed) edge (k, i).

Now, c_{ij} = 1 means that \exists! t . (m_{it} = 1 \mbox{ and } d^T_{tj} = 1). Deciphering the meanings of the elements of these matrices, this means that the vertex i of the pattern is mapped to a vertex t of the model, and the vertex j of the pattern is mapped to a vertex k of the model such that the model contains the edge (k, t). Now, suppose there is an edge (i, j) in the pattern. c_{ij} = 1 then asserts that M maps i to t and j to k such that the model contains the edge (k, t). That is, condition (1) as stated in [Ullm1976] and applied to directed graphs checks that M actually reverses the direction of edges! Therefore, one must actually use C^T to check for embeddings.

Now, since the original algorithm in [Ullm1976] was designed for undirected graphs, this extra transposition did not matter, and I think this is the reason why Ullmann does not mention it.
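The corrected condition can be checked directly in matrix terms; here is a small sketch with plain nested lists (not the actual implementation), where A is the n×n pattern adjacency matrix, B the m×m model adjacency matrix, and M an n×m assignment matrix with exactly one 1 per row:

```python
def is_embedding(A, B, M):
    """Test Ullmann's condition (1), adapted to directed graphs:
    a_ij = 1 must imply (C^T)_ij = 1, where C = M (M B)^T.
    """
    n, m = len(A), len(B)
    # D = M B  (an n x m matrix).
    D = [[sum(M[i][k] * B[k][j] for k in range(m)) for j in range(m)]
         for i in range(n)]
    # C = M D^T = M (M B)^T  (an n x n matrix).
    C = [[sum(M[i][t] * D[j][t] for t in range(m)) for j in range(n)]
         for i in range(n)]
    # The extra transpose: for every pattern edge (i, j), check C[j][i].
    return all(C[j][i] >= 1 for i in range(n) for j in range(n) if A[i][j])
```

On a single-edge pattern and model, the identity assignment passes, while the assignment that swaps the two vertices (and hence reverses the edge) is rejected.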

I have implemented all the things I have described so far, so diagram embeddings kinda work :-) I have played with Python generators a bit, so the code only produces embeddings on an as-needed basis. I did that because I thought of situations where any diagram embedding will suffice, but also because using generators has resulted in what I believe to be more elegant code. The code abounds in comments, so I think it shouldn’t be a problem to comprehend for someone other than myself. I don’t have a formal proof of this statement, however, so, I guess, Tom is going to be the test subject for this supposition ^_^

There are still a couple of things to do, though. First of all, Ullmann shows a nice optimisation to his algorithm; it looks pretty simple, so I’ll add it. I will then write a couple more tests, including some stress tests involving complete graphs. I will also have to rename the function which does all this magic from subdiagram_embeddings to diagram_embeddings, for obvious (I hope :-) ) reasons.


[Ullm1976] J. R. Ullmann, An Algorithm for Subgraph Isomorphism, Journal of the ACM, 23(1), January 1976, 31–42.

The Reflection about Inference

This week I have been working both on fixing some existing code (including diagram layout) and on better understanding the code that I am going to write next. As far as the fixes are concerned, I have further polished the diagram layout code, including the addition of some pretty printing for DiagramGrid. I didn’t initially expect pretty printing to be useful for this class; however, it turned out that being able to quickly glance at the grid itself was very helpful in certain situations.

Something which makes me very content is that I have finally submitted a fix for the sort key problem for unordered collections. The essence of the problem is as follows. With hash randomisation enabled, the order of Basic.args changes on every run. On the other hand, Basic.sort_key traverses the arguments in the order in which they are stored; therefore, sort keys depend on the actual order of the arguments. This gave me trouble when working on laying out diagrams, specifically in handling groups. The thing is that the group handling code relies on FiniteSet (this maybe isn’t the best idea, but that’s a different story, really :-)): groups are eventually converted to FiniteSets of FiniteSets. To assure stable output, the collection of FiniteSets is sorted. However, due to the influence of hash randomisation on sort keys, this sorting would not actually produce the desired result. There was a similar problem in the same block of functionality which had to do with sorting Diagrams; the issue there was that a Diagram stores Dicts which, being unordered collections, were subject to the same sort key trouble. Pull request #1446 fixes all of these issues and, finally, the diagram drawing code almost always passes all of its tests.
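The principle behind the fix can be shown in a few lines: a sort key for an unordered collection must be built from the *sorted* keys of its elements, so that it no longer depends on iteration order (and hence on hash randomisation). This is only a sketch of the idea, not SymPy's actual sort_key:

```python
def stable_key(obj):
    """An order-insensitive sort key for nested unordered collections."""
    if isinstance(obj, (set, frozenset)):
        # Sorting the element keys removes any dependence on the
        # (hash-randomised) iteration order of the set.
        return ("set", tuple(sorted(stable_key(e) for e in obj)))
    return ("atom", str(obj))
```

With such a key, sorted() over a collection of frozensets of frozensets produces the same order on every run.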

It is worth mentioning that, while the fix for the sort key problem was not yet included in #1429, I was inclined to classify all failures as related to FiniteSet.sort_key. With the fix in the branch, it turned out that there were some other subtle sorting issues, which I am still fixing.

I have also sent pull request #1440, which fixes the pretty printing of morphisms and diagrams, introduced by myself in #1338. Initially, I used short Unicode arrows for pretty printing morphisms, but Tom and I arrived at the conclusion that these arrows look too condensed. I then chose to use long Unicode arrows; it turned out, however, that Unicode characters which span more than one column are not rendered consistently across different machines. On my computer, the longer arrow would overlap with the next character in line; on Tom’s, it would not. Aaron has suggested building up arrows out of em dashes and black right-pointing triangles, and this seems to work better, although it still looks ugly with some fonts (e.g., the default font in rxvt-unicode, as reported by Tom).

I have also promised to implement variable-length horizontal arrows. I have decided to postpone this for now, however, in order to better focus on my GSoC project. I will keep that task in mind and will most probably return to it in a couple of days.

As for deciding the commutativity of diagrams, I have run into an unexpected conceptual problem, arising from the fundamental difference between diagrams with conclusions and without conclusions. Before explaining the problem, I will recall the descriptions of these two types of constructions. A commutative diagram is a collection of morphisms (which usually form a connected directed graph) with the property that composing the morphisms along any two paths between the same two objects produces the same composite morphism. While this notion is quite general, in category theory it is customary to produce statements like “if there are such morphisms, then there exist such other morphisms, and the diagram is commutative”. Such a statement is clearly an implication. The class Diagram is a representation of this second type of statement and contains sets of premise morphisms and conclusion morphisms. Diagram is also conventionally capable of representing simple commutativity if no conclusions are specified.

While I was initially quite comfortable with using Diagram for both types of statements, I am now really inclined to consider creating two separate classes. Thus I plan to rename Diagram to Implication and add a different Diagram which will represent what I used to call a “commutative diagram without conclusions”. That is, Diagram will hold only one collection of morphisms.

With this separation, it is immediately clear that, in the context of my model, the question “Is this diagram commutative?” actually incorporates two totally different questions:

  1. Is this Diagram commutative?
  2. Is this Implication true and commutative?

Fortunately for me ( :-) ), this newly-discovered separation does not remove the possibility of answering both questions with almost the same algorithm. I will start with question (1) to further stress the difference between the semantics of diagrams and implications.

Consider a collection of Diagrams and Implications known to be commutative. (By saying “an Implication is commutative” I will abuse the terminology and mean “an Implication is true and commutative.”) We need to decide whether the target Diagram is commutative. The algorithm I will describe is based on backward chaining and is therefore recursive. A recursive step consists of two stages: the commutativity stage and the inference stage. The goal of the commutativity stage is to decide whether the current version of the target Diagram is commutative; the goal of the inference stage is to see whether applying one of the Implications will make the target Diagram commutative.

The commutativity stage starts by taking every morphism of the target Diagram and putting each of them into its own commutative subdiagram. Now, for each commutative subdiagram, the algorithm will pick a subset of morphisms and will then put the subsets together to form another subdiagram. This subdiagram will then be compared with each of the Diagrams known to be commutative. If a match is found, the subdiagram is added to the set of commutative subdiagrams. Then all possible “absorptions” among the subdiagrams are performed (i.e., if subdiagram A is a subdiagram of B, then A is removed from the collection, for obvious reasons), and the iteration returns to its start, where it picks subsets of the new subdiagrams. Since the number of morphisms in the diagram is finite, this process terminates. If, in the end, the collection of commutative subdiagrams contains only the target diagram, it is deemed commutative.
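The absorption step in particular is easy to state in code; representing commutative subdiagrams simply as frozensets of morphism names (an illustrative simplification, not the planned data structure), it reads:

```python
def absorb(subdiagrams):
    """Drop every commutative subdiagram that is properly contained in
    another one, as in the absorption step described above.
    """
    return [s for s in subdiagrams
            if not any(s < other for other in subdiagrams)]
```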

Note that this algorithm is very similar to one of the methods of finding all prime implicants of a boolean function (we called it the Blake–Poretski algorithm at university, but I cannot find any references on the Internet). I have considered the possibility of directly converting the commutativity stage to a boolean function minimisation problem, but I haven’t found a sufficiently elegant way yet.

The inference stage exactly follows the idea of backward chaining. For each Implication, an attempt is made to find an embedding of its premises into the target Diagram. If such an embedding is found, the corresponding conclusions are added to a copy of the target Diagram and a recursive examination of the modified Diagram is made. The found embedding of one of the Implications, plus the added conclusions, is propagated down the recursion tree as a commutative subdiagram. The commutativity stages of the subsequent recursive calls will take its commutativity for granted.

If one of these recursive calls returns a positive result, it is propagated up the call stack. If none of the recursive calls returns a positive result, or if no embedding of an Implication has been found in a certain recursive call, a negative result is returned from that recursive call.

Note that it actually was the inference stage that I described in my original GSoC proposal.

Before showing how to answer question (2), I would like to analyse the algorithm idea I have just presented a little bit. One can see that the commutativity and inference stages are very different; so different, in fact, that they are almost independent. Therefore, these two bits of functionality will live in separate pieces of code, and will be later combined to function together. I will start by defining two internal classes, _CommutativityStage and _InferenceStage which will host the corresponding functions. The code that will actually combine the two will either be a global function or a class; this will be clearer later and is not important at the moment.

Question (2) now: “Is the given Implication true (and commutative)?”. In this case, one should start from the premises of the given Implication and apply the same strategy as in answering question (1). Here, however, the terminal criterion is that the target Diagram (obtained from the premises of the original Implication) is commutative and contains the conclusions of the original Implication.

A remark about comparing diagrams is due here: this is nothing but the subgraph isomorphism problem. I have already found this paper (haven’t read it yet), but I’m open to other paper suggestions in this regard :-)

EDIT: I have decided to not follow this article and instead focus on a more basic solution. Should the need occur, I will implement this (apparently) more efficient version.

It is necessary to keep in mind that, besides finding the subgraph isomorphism proper, the code will have to pay attention to morphism properties as well.

Now, the most attentive readers might have already noticed that semantically splitting the class Diagram into two will impact diagram drawing. Yet the impact will be rather modest, since the drawing code already knows how to deal with something similar to Implication; adding explicit support for the new Diagram is going to require minimal effort.

In this blog post, I recognize that my initial class model was flawed in yet another place. I try to see this as a sign of personal progress, though :-)

The Polish and Further Planning

I am currently working on getting my two pull requests into master. Right now, the efforts have been concentrated on the first pull request, concerned with diagram layout. Among the minor fixes, there came a number of more important changes, which I am going to shortly describe in this post, before I get to my further plans.

Among the changes worth mentioning are some updates to the choice of internal data structures of DiagramGrid. Previously, FiniteSet was used to store any sets. Following my mentor’s suggestion, though, I have refactored the code to only use FiniteSet when something needs to be stored in Basic.args. On all other occasions, the built-ins set and frozenset are used, depending on whether a mutable set or an immutable hashable container is needed.

The other change, bearing no fundamental importance but still worth mentioning, is the transition to storing undirected edges as two-element frozensets. Previously, edges were stored as two-element tuples, which caused a bit of hassle when it came to recognizing the equality of (A, B) and (B, A). The choice of frozenset has brought in more elegant code. In terms of performance, I do not think that this transition has had a really important impact, since I didn’t really keep performance in mind when writing other parts of the code anyway. (I am mainly referring to the construction of the skeleton of the diagram and its splitting into triangles.)
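The point of the change is visible in a couple of lines:

```python
# Undirected edges as two-element frozensets: (A, B) and (B, A)
# collapse to the same edge, so equality and membership tests need
# no special-casing.
edges = {frozenset(("A", "B")), frozenset(("B", "C"))}

assert frozenset(("B", "A")) == frozenset(("A", "B"))
assert frozenset(("B", "A")) in edges
assert len(edges | {frozenset(("C", "B"))}) == 2  # no duplicate edge added
```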

Among the more significant improvements, I will list the support for disconnected diagrams, one-object diagrams, and, the pinnacle, graceful handling of the situations when growing a pseudopod fails. Before you start thinking about who the hell would need disconnected or one-object diagrams, I will remind/introduce the process and the necessity of pseudopods in diagram layout (Hey, that did sound like rubbish, didn’t it? :-D )

The layout algorithm essentially considers the slightly augmented underlying undirected graph of the diagram and splits it into as many triangles as it can. Then it tries to sort those triangles according to some rather heuristic (i.e., arbitrary) priority metric, picks them one by one, in this order, and tries to place them on the plane, edge to edge. The strategy being purely greedy, at some point it may happen that there are still triangles left, but no free edges on the plane to which they could be attached. In this situation, the algorithm attempts to attach one of the remaining triangles by a vertex; that is, it tries to find a vertex already in the plane which also belongs to one of the remaining triangles. Finally, the algorithm adds an edge of the found triangle to the plane and restarts the process of picking triangles and attaching them by edges. This newly added edge is referred to as a pseudopod.

Now, what happens when a pseudopod cannot be grown? Initially, I was under the impression that it is rather hard to construct such a diagram. However, it turned out to be rather easy. Consider the set of objects \{A\}\cup\{A_i\mid 1\leq i\leq 10\} and the set of morphisms \{f_i:A\rightarrow A_i\mid 1\leq i\leq 10\}. DiagramGrid will lay out the first 8 of the A_i quite all right: as one would expect, A gets to be the centre of a 3×3 square whose border consists of the 8 A_i‘s. However, the last two A_i‘s create a situation in which there are triangles left, but no pseudopods can be grown.

In an attempt to address this problem, I have considered various possible strategies, and have chosen the following one. When no pseudopod can be grown, take the set of all objects that have not yet been placed and construct from them a subdiagram of the diagram to plot. Lay that diagram out recursively and then attach the resulting grid to the initial diagram.

One important remark is due here. This strategy is an “oh, gosh, things have gone very bad” strategy, in that it is applied only when all other approaches have failed, and in that it does not really guarantee a nice look for the final diagram. However, it does provide graceful handling of these specific situations, and I do believe that the output is still going to look acceptable.

While the idea itself is rather simple, it is necessary to pay attention to the subtleties it brings along. First of all, the subdiagram constructed from the remaining objects is not necessarily connected. That’s easy to see even in the example I have shown in the previous paragraphs. Furthermore, the constructed diagrams do not necessarily have non-loop morphisms! (By abuse of graph-theoretic terminology, I call a morphism with the same domain and codomain a loop morphism.) That is, addressing pseudopod extension failures brings about the necessity to handle disconnected diagrams and one-object diagrams.

There is not much to say about the support for disconnected diagrams and one-object diagrams, except that I have implemented support for these two cases as well. The latter case is handled trivially, while the former employs a standard depth-first search of the underlying undirected graph and separate layout of the connected components. The components are currently dumbly positioned side by side, in a line, and a comment in the source code evokes the possibility of using groups to get a different layout. I’m open to suggestions of further improvements in this area, though.
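The component-splitting step amounts to a standard traversal; a sketch with illustrative names, using the frozenset edge representation mentioned earlier:

```python
def connected_components(objects, edges):
    """Split the underlying undirected graph of a diagram into
    connected components via depth-first search.  ``edges`` is an
    iterable of two-element frozensets; loop edges are ignored, since
    they do not affect connectivity.
    """
    adj = {obj: set() for obj in objects}
    for edge in edges:
        if len(edge) == 2:
            u, v = tuple(edge)
            adj[u].add(v)
            adj[v].add(u)

    seen, components = set(), []
    for obj in objects:
        if obj in seen:
            continue
        # Iterative DFS from an unvisited object.
        stack, comp = [obj], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(adj[node] - comp)
        seen |= comp
        components.append(comp)
    return components
```

Each component can then be laid out separately and the resulting grids placed side by side.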

It’s time to speak about my plans. I have spent more time than initially expected on handling pseudopod growth failures. This means that there are still some suggestions by my mentor waiting to be addressed (I haven’t read them yet; hopefully, nothing fundamental there :-D ). Further, I absolutely must fix the problem with the sort key of FiniteSet. I have been talking about fixing it for about two weeks already, and it doesn’t seem to require that much effort. It is essential that this fix be done, though, since, without it, tests in the categories module fail half of the time. Finally, I will fix how morphisms are currently pretty printed by removing the use of wide Unicode symbols. These activities will hopefully not take me more than two days, at the very most.

Next comes the other exciting part of my project: deciding the commutativity of diagrams. I have provided the general idea of the algorithm in my proposal. Given that I am currently about two weeks behind the schedule in my proposal, and that I will still need to spend time on getting the code in the pull requests up to snuff, I’m feeling really wary about planning my own time. However, since the algorithm for deciding the commutativity of diagrams described in my proposal seems rather straightforward, I think I will have at least a basic working version of it two weeks from now, that is, by August 5. Allowing another week as buffer time and yet another week for merging the corresponding pull request, I do expect to be in time for the firm pencils-down date.

One last remark to make is that, after the official end of the GSoC timeframe, I will still have at least one week of fairly free time (I actually expect to have about 2.5 to 3 weeks), which means that I will bring the code to a sufficiently polished state despite any possible lags.

The Almost There (Chapter 1)

This week I have come ridiculously close to finalising the second major part of my GSoC work: diagram plotting. Before submitting the two pull requests, I only have to add proper docstrings to a couple of classes and to integrate the plotting with sympy.printing.preview for easier use. (Well, there is also a FiniteSet-related issue, but I hope to be able to fix it more or less swiftly.) The main functionality is ready, however, and that gives me hope :-)

In terms of material progress, this week has been rather unimpressive: I haven’t written that many lines of code, and what I have written has introduced seemingly inessential changes to the appearance of the diagrams. Nevertheless, I think this week can be marked as one of the most thought-intensive.

At the beginning of this week I extended the drawing of curved morphisms to take into account situations when there are multiple morphisms between the same pair of objects. This makes it possible to automatically typeset diagrams such as Diagram 1.

Diagram 1. Multiple curved morphisms.

For the next two days I was smashing my head against the seemingly simplest approach to the problem of positioning morphism labels such that they don’t get intersected by morphisms. The upsetting part is that, despite the amount of thinking I have done and the amount of code and comments I have written, the actual output hasn’t really got much better. (It may be considered a success, though, that it hasn’t got much worse either :-) ) Consider Diagram 2. No special care about the position of the labels is taken.

Diagram 2. Arbitrary placement of morphism labels.

Now consider Diagram 3; notice that the labels are now on the outer sides of the diagram. The trick basically (very basically) consists of detecting those morphisms which form the outer edges of the diagram and then placing their labels on the proper side. While for vertical and horizontal morphisms the procedure is pretty straightforward, for diagonal morphisms I resorted to applying some basic ideas from analytic geometry, and yes, I even do floating-point computations (although I of course try to do as few of them as possible). Nevertheless, the approach I have implemented feels very far from perfect. I hope, though, that I have managed to strike some balance between code that works sufficiently fast and well and code that is intelligible. Note that the positioning of the labels of the morphisms which lie deep inside the visual structure of the diagram remains pretty arbitrary, which may sometimes get ugly.

Diagram 3. Explicit positioning of labels of outer morphisms.

The next feature I have added is the ability to draw “loop” morphisms, i.e., morphisms which have the same domain and codomain. While a proper layout of such morphisms is not guaranteed for very crowded diagrams, this functionality is of some use, as can be seen in Diagram 4.

Diagram 4. Typesetting of loop morphisms.

Finally, I have implemented support for custom arrow formatters. Arrow formatters are associated with morphism properties. Whenever a morphism with certain properties is typeset, after the necessary thinking has been carried out, the resulting data is passed to the formatter. The formatter is free to modify anything it wants in order to influence the appearance of the arrow. A common use is shown in Diagram 5. This effect was achieved with a two-line formatter.

Diagram 5. Use of formatters.
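To make this concrete, here is a minimal sketch of attaching a formatter (the morphism, the “unique” property, and the dashed style are invented for the example; the formatter receives an arrow description object and is free to tweak its fields):

```python
from sympy.categories import Object, NamedMorphism, Diagram
from sympy.categories import DiagramGrid, XypicDiagramDrawer

A, B = Object("A"), Object("B")
f = NamedMorphism(A, B, "f")

# Mark f with the property "unique"; the formatter below will fire
# for every morphism carrying this property.
diagram = Diagram([f], {f: "unique"})
grid = DiagramGrid(diagram)
drawer = XypicDiagramDrawer()

def dashed(arrow):
    # The formatter gets an ArrowStringDescription and may modify
    # any of its fields; here we just restyle the arrow as dashed.
    arrow.arrow_style = "{-->}"

drawer.arrow_formatters["unique"] = dashed
print(drawer.draw(diagram, grid))
```

The resulting Xy-pic code draws the arrow of f dashed, while the rest of the diagram is typeset as usual.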

I must confess that dealing with hash-randomisation-related issues takes up much more time than I ever expect. I have been constantly getting back to certain bits of my code and adding ever more invocations of sorted to ensure stable output. Working on this, as well as on some obscure tuning of small details of the diagrams, is actually what has consumed the bulk of my time this week. The visual impact of such modifications is usually minimal; yet, I do believe they play a rather important role.
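The pattern itself is trivial; a toy illustration (not SymPy code) of why the sorted calls are needed:

```python
# Iteration order over a set can vary from run to run under hash
# randomisation, so any place where the output order matters first
# sorts the elements by a deterministic key.
objects = {"C", "A", "B"}
stable = sorted(objects, key=str)
print(stable)  # ['A', 'B', 'C'], on every run
```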

The First Tangible

I have the preliminary pleasure of announcing the first more or less tangible fruits of my work: some real diagram plots. I will postpone this part of the show-off until the end of the post, however, so that more people read the whole thing :-)

The work I have done this week belongs to two essentially different classes. In the first half of the week I was still adding new features to the layout functionality; this is mainly about logical groups, which are a way for the user to exercise some control over how the objects are laid out. The idea behind offering the user some control over the layout is that DiagramGrid cannot always guess which placement of objects would be best for the user, so additional input is very welcome on such occasions. It was important, though, to request this input in a way that demands as little effort as possible on the part of the user.

Take a look at the definition of a pullback. In this diagram, one can easily see that the objects P, X, Y, Z are distinctly separated from the object Q. It is easy to note further that many universal properties also rely on such semantic grouping of objects. Further yet, logical groups are easily seen in a lot of diagrams, as happens in the case of the five lemma. These observations have made me think that logical groups would be a perfect addition to the existing automatic layout functionality.

In the current implementation, the user specifies the logical groups by supplying a set of sets of objects. Thus, in the case of the five lemma, the user can supply the set {{A, B, C, D, E}, {A', B', C', D', E'}} as the description of the groups that can be seen in the diagram. In the case of the pullback, the following set would be a valid specification: {Q, {P, X, Y, Z}}. Note how it is not necessary to wrap standalone objects in singleton sets.
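As a sketch (the morphism names are invented for the example), the pullback groups can be handed to DiagramGrid like this:

```python
from sympy import FiniteSet
from sympy.categories import Object, NamedMorphism, Diagram, DiagramGrid

# The pullback: the square over P, X, Y, Z plus the universal
# morphism u from the "competing" object Q.
P, X, Y, Z, Q = [Object(name) for name in ["P", "X", "Y", "Z", "Q"]]
f = NamedMorphism(X, Z, "f")
g = NamedMorphism(Y, Z, "g")
p1 = NamedMorphism(P, X, "p_1")
p2 = NamedMorphism(P, Y, "p_2")
q1 = NamedMorphism(Q, X, "q_1")
q2 = NamedMorphism(Q, Y, "q_2")
u = NamedMorphism(Q, P, "u")

diagram = Diagram([f, g, p1, p2, q1, q2, u])

# Q stands alone, while the square forms a group; standalone objects
# need not be wrapped in singleton sets.
grid = DiagramGrid(diagram, groups=FiniteSet(Q, FiniteSet(P, X, Y, Z)))
print(grid.width, grid.height)
```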

One more detail about how the groups are handled, of which I am not yet very sure, both in terms of utility and of test coverage, is that groups can be nested to arbitrary depth. This is because the current procedure for handling groups is as follows: each element of the supplied set of groups which is itself a set is treated as a diagram in its own right and is laid out. The algorithm does not really look into the structure of such a group before going into recursion. After having laid out each of the groups, the current implementation constructs a dummy diagram, starting from the original one, in the following way. The new diagram has as objects all objects and groups included in the groups set. If there exists a morphism between the objects of any two groups, it is added to the new diagram. Duplicate morphisms of this kind are essentially omitted for efficiency reasons. This new dummy diagram is laid out. Then, in the resulting grid, the cells (and the respective rows and columns) which correspond to groups are expanded to hold the grids into which the groups have been laid out.

The procedure I describe is rather simple, which makes me believe that it is also robust. Nevertheless, I cannot say I am fully satisfied with how much I have tested it; somewhat pessimistically, I do expect quite a number of bugs.

While logical groups seem to be a rather powerful feature already, they do not by themselves produce the layout in which the five lemma is usually typeset. Moreover, I noticed that the current algorithm would never lay out a line diagram in a line. To address this, I have added a hint to the layout functionality which instructs the engine to use a different layout algorithm, resulting in more line-like shapes. This algorithm essentially does a depth-first traversal of the underlying undirected graph of the diagram and places the objects according to their distance from the root. Now, since I wanted to see such a layout applied to some logical groups as well (cf. the five lemma), I have implemented the possibility of supplying separate layout hints for each group. With these instruments at hand, as well as with the simple hint transpose, which instructs the layout engine to flip the resulting grid about its main diagonal, it is possible to give DiagramGrid sufficient information to have the five lemma laid out properly.
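Here is a small sketch of the two hints in action on a line diagram (object and morphism names are made up; the expectation is that the sequential hint produces a single row, and transpose turns it into a single column):

```python
from sympy.categories import Object, NamedMorphism, Diagram, DiagramGrid

# A "line" diagram: A -> B -> C -> D.
A, B, C, D = [Object(name) for name in "ABCD"]
f = NamedMorphism(A, B, "f")
g = NamedMorphism(B, C, "g")
h = NamedMorphism(C, D, "h")
diagram = Diagram([f, g, h])

# The sequential hint lays the chain out in a single row, following
# the depth-first traversal of the underlying undirected graph.
row = DiagramGrid(diagram, layout="sequential")
print(row.width, row.height)

# transpose flips the grid about its main diagonal, giving a column.
column = DiagramGrid(diagram, layout="sequential", transpose=True)
print(column.width, column.height)
```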

While I think that this result is pretty satisfactory, it would of course be way cooler to have DiagramGrid guess such stuff automatically. That is a rather nontrivial task, however. Take the five lemma again: it can also be laid out in a line and not look that bad. It seems very hard to devise automatic criteria for answering such questions. Yet, I expect that nice results can be achieved by diversifying the controls offered to the user as far as layout is concerned.

I am also considering the possibility of a partially manual layout (and, eventually, a fully manual one), achieved by specifying the exact initial positions of some of the objects. I believe this would be very useful in the long run, because it would offer a great deal of control while still not requiring much unnecessary effort on the user’s side. The implementation of such a thing is, however, still a matter of the future.

I finished the first half of the week by documenting the layout algorithm in the source itself, as well as writing proper docstrings for DiagramGrid and its methods.

The second half of the week I have spent working on producing actual Xy-pic code. As expected, the most difficult part lay in drawing bent arrows. I have implemented a basic routine to address this. The pinnacle of the currently available functionality is shown in the figure.

The automatic plot of a moderately complex diagram

There is a bit of work left, however. First of all, notice how the arrow curving from A_7 to A_8 is too close to the objects it passes by. I plan to solve this by increasing the default amount of curving. Further, nothing will currently be done if another morphism between these two objects is added, which results in ugly overlaps. Yet another important problem is the positioning of the labels of the morphisms: some of them float too loosely in space, and it is hard to understand which arrow they belong to. The label f_{10} is even crossed by the arrow f_7. Unfortunately, I don’t expect to solve all of these problems with the positioning of morphism labels, because, in many cases, it would be necessary to know the actual graphical information, which is impossible at the current level of abstraction.

Thus my immediate plans are to fix the problems I have enumerated as completely as possible, and to then submit the layout functionality and the actual drawing functionality as two pull requests, to facilitate review.
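For the impatient, the whole pipeline (layout plus Xy-pic generation) can be driven through a single helper; a minimal sketch on a made-up triangle diagram:

```python
from sympy.categories import Object, NamedMorphism, Diagram
from sympy.categories import xypic_draw_diagram

# A small triangle diagram, just to show the generated Xy-pic code.
A, B, C = Object("A"), Object("B"), Object("C")
f = NamedMorphism(A, B, "f")
g = NamedMorphism(B, C, "g")
diagram = Diagram([f, g])

# Lays out the diagram and returns the \xymatrix source as a string.
print(xypic_draw_diagram(diagram))
```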

The Layout

I haven’t posted in quite some time, because I have been totally absorbed by implementing the functionality to lay out diagrams. I am painfully close to having finished it; however, I’ve been that close for a couple of days already, and that’s something which drives me crazy :-)

OK, it’s time to take a general look over what has been done. In the end, I will describe my short-term further plans a bit.

First of all, the goal. When you want to lay out a (commutative) diagram, you should really aim at a grid layout. This is how people normally typeset diagrams in articles, and this is what I would be happy to approximate. The resulting grid layout is one of the traits of diagrams which make the task of automatically drawing them different from the task of automatically drawing a graph. The other specific feature is that, when you get a diagram, you can (and should) actually throw away those morphisms which are not really interesting. In the following sections, I will try to describe the philosophy behind the functionality I have implemented, piece by piece.

The first stage of the algorithm is to remove the uninteresting morphisms. At this stage, those composite morphisms which have no properties are discarded; identity morphisms without properties are discarded as well. In fact, this corresponds pretty well to how people draw diagrams. This first stage ends by merging the premises and the conclusions of the diagram into a single container. This is because, when drawing, the distinction between premises and conclusions is not important at all, since all interesting morphisms should make their way into the final picture.
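A hypothetical sketch of this first stage (this is not the actual SymPy internals, just the idea: premises and conclusions are merged, and property-less composites and identities are dropped):

```python
from sympy.categories import (Object, NamedMorphism, Diagram,
                              CompositeMorphism, IdentityMorphism)

def merge_interesting(diagram):
    """Merge premises and conclusions into one dict, dropping
    composite and identity morphisms that carry no properties."""
    merged = {}
    for part in (diagram.premises, diagram.conclusions):
        for morphism, props in part.items():
            if not set(props) and isinstance(
                    morphism, (CompositeMorphism, IdentityMorphism)):
                continue  # uninteresting: would only clutter the picture
            merged[morphism] = merged.get(morphism, set()) | set(props)
    return merged

A, B, C = Object("A"), Object("B"), Object("C")
f = NamedMorphism(A, B, "f")
g = NamedMorphism(B, C, "g")
d = Diagram([f, g])

# Only f and g survive; identities and property-less composites go.
print(len(merge_interesting(d)))
```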

At the second stage, the algorithm abstracts morphisms away, in favour of unoriented edges between objects. The code builds the so-called skeleton of the diagram (that’s an ad-hoc name). The skeleton is a graph which has all objects of the diagram as vertices. Between two vertices of the skeleton there is an undirected edge if these objects are connected by an (interesting) morphism. Notice how we discard the direction of the connection. After all edges corresponding to morphisms have been added, the skeleton is further completed in the following way. An edge is added between any two objects A and C for which there exists an object B such that A and B are connected by an interesting morphism and B and C are connected by an interesting morphism. This is not the transitive closure of the graph; it is only the first step towards it. The new edges are dummy edges, in the sense that they may not correspond to interesting morphisms.
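A self-contained toy sketch of the skeleton construction (not the SymPy code; morphisms are represented simply as (domain, codomain) pairs):

```python
from itertools import combinations

def build_skeleton(morphisms):
    """Return the undirected edges of the skeleton, plus the dummy
    edges added for two-hop connections."""
    edges = {frozenset((dom, cod)) for dom, cod in morphisms}
    objects = {obj for edge in edges for obj in edge}
    dummies = set()
    for a, c in combinations(sorted(objects), 2):
        if frozenset((a, c)) in edges:
            continue
        # one step of transitive closure: A--B and B--C yield a
        # dummy edge A--C
        if any(frozenset((a, b)) in edges and frozenset((b, c)) in edges
               for b in objects - {a, c}):
            dummies.add(frozenset((a, c)))
    return edges, dummies

edges, dummies = build_skeleton([("A", "B"), ("B", "C")])
print(dummies)  # just the dummy edge A--C
```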

The next stage is the first key stage of the algorithm. The skeleton is tessellated into triangles, which will eventually be used to get as many right angles in the layout as possible. Here is where the dummy edges come into play. Their presence assures that the diagram can be completely split into triangles. For those who have read my proposal, I will remark that all the stages of diagram analysis I described after laying out the triangles are actually unnecessary, precisely because of these dummy edges, which guarantee that we have sufficiently many triangles. Yet, dummy edges are indeed dummy, in the sense that most of them will not appear in the final diagram. This makes the triangles we find in the skeleton unevenly interesting to us. Triangles which have more than one dummy edge are totally extra, because they would distract the attention of the code from triangles with more meaningful edges and would generally mess things up. Therefore, such triangles are immediately dropped.
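The triangle selection can be sketched in a self-contained toy example (again, not the SymPy implementation; the skeleton of a chain A–B–C–D is written out by hand):

```python
from itertools import combinations

# Edges of the skeleton as frozensets of object names; the dummy
# edges added for the two-hop connections are tracked separately.
edges = {frozenset(e) for e in [("A", "B"), ("B", "C"), ("C", "D")]}
dummies = {frozenset(e) for e in [("A", "C"), ("B", "D")]}
all_edges = edges | dummies
objects = {obj for edge in all_edges for obj in edge}

triangles = set()
for a, b, c in combinations(sorted(objects), 3):
    sides = [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]
    if all(side in all_edges for side in sides):
        # drop triangles with more than one dummy edge: they carry
        # almost no meaningful information
        if sum(side in dummies for side in sides) <= 1:
            triangles.add(frozenset((a, b, c)))

print(sorted(sorted(t) for t in triangles))
```

For this chain, only the triangles ABC and BCD survive, each using exactly one dummy edge.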

Once the “triangulating” stage is complete, the core of the algorithm comes into play. Basically, the idea is to pick one of the triangles, pick one of its edges and put it on a grid, horizontally, remembering that it is in the fringe. Then, iteratively, “weld” interesting triangles to the fringe, eventually placing all objects of the diagram on the grid. This part is the trickiest part of the whole algorithm, so prepare to hear a talk about a lot of magic :-)

Any triangle is placed on the grid as a right-angled triangle, with its two perpendicular edges horizontal and vertical. This assures that we keep “quite” close to the desired grid layout. Whenever a triangle is placed on the grid, the objects which form its vertices are recorded as already placed. Then, those triangles which only contain objects which have already been placed (uninteresting triangles) are dropped. That is, once the places of some objects are decided, those objects are never considered again. This may constitute a point of future improvement, of course, because objects are often drawn in several copies to make a diagram look clearer.

When placing a triangle on the grid, the code attempts to assure that as many interesting (not dummy) edges as possible will be drawn horizontally or vertically. There is some dark magic in the code which detects such situations, but I hope that, after following the trail of comments and just reading the code itself, the whole thing becomes rather clear.

Now, since the algorithm is essentially greedy, there can be situations when all edges to which the remaining triangles could have been welded, have already been positioned inside the structure and it is now impossible to find the welding edge. In this case, the algorithm attempts to attach a triangle to the existing structure by a vertex only. If such a possibility is found, an edge (the pseudopod) of the triangle is placed as vertically (or horizontally) as possible and then the welding process can be continued, since there already is a welding edge.

Let’s now focus on what happens to the fringe. When a new triangle is welded, the two new edges are added to the fringe. No edges are deleted, however, because the welding edge might still have some free space on its other side. Edges are deleted from the fringe only when they are detected as possible welding edges, but the algorithm then finds that there is actually no space around them. I have considered several possibilities of correcting the fringe on different occasions; my conclusion so far has been that it’s generally not worth it, performance-wise. This question, however, should be better investigated, including doing some complexity analysis.

You might have noticed that I do not in any way treat the situation when a pseudopod cannot be grown. I have not encountered such situations during the testing yet, so I decided not to attempt to handle them before I have actually seen an example. Taking into consideration that I am going to work with diagrams rather intensively later, if such situations are possible, I will indeed run into them. I must confess that I haven’t considered the problem theoretically yet, though :-)

The description of the essential parts of the algorithm is complete here, so I’ll pass on to an overview of the remaining problems and further short-term plans.

Tom Bachmann, my mentor, has suggested that I describe the steps of the algorithm in the docs which come with the source code. I will do this shortly. I believe it is essential to write such documentation as soon as possible, despite the abundant (hopefully) comments in the code.

My immediately next task is, however, producing the actual Xy-pic code. I expect that getting this done at a basic level shouldn’t be hard. However, drawing longer morphisms and avoiding intersections as much as possible may prove a rather hard task.

Oh, and the almost forgotten conclusion: I now essentially have the core of the automatic diagram plotting functionality, since laying out objects is the most difficult part of the affair.