Implementation of the Kauffman Polynomial in Python
This blog article was converted from the following Typst report; the source code of the project is on my GitHub:
Report: Latest version of the PDF report
Source Code: https://github.com/aziis98/kauffman-polynomial
Introduction
The Kauffman Polynomial
The Kauffman polynomial L is a two-variable polynomial invariant of regular isotopy for unoriented knots and links in 3-dimensional space. It is defined using skein relations, more precisely by an implicit functional equation.
The defining axioms of the Kauffman polynomial are the following: given a link diagram K we have L_K(a,z) \in \mathbb{Z}[a, a^{-1}, z, z^{-1}] and:
- If K and K' are equivalent up to regular isotopy, then L_{K}(a,z) = L_{K'}(a,z).
- We have the following identities:
L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_over.png}) + L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_under.png}) = z (L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_h.png}) + L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_v.png}))
L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_unit.png}) = 1
L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_over_twist.png}) = a L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_strand.png})
L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_under_twist.png}) = a^{-1} L(\includegraphics[height=1em]{/articles/kauffman-polynomial/skein_strand.png})
We will see later that the Kauffman polynomial can be defined more explicitly, via a closed-form recursive definition that we will use to derive the first version of our algorithm.
Computational Knot Theory
The first problem in computational knot theory is to find a good representation for knots and links. There are various representations in the literature, such as:
- Planar Diagram codes, PD codes — This is the one we will be using as input for the algorithm of this project. The PD code for a link is generated by labelling each arc of the link with a number. Then we choose a starting point and walk along the link; when we pass a crossing as the over-strand we write down a 4-tuple of the labels of the arcs incident to the crossing, in counter-clockwise order. This has a few downsides when used for knot/link manipulations.
- Gauss codes — This is very simple to generate: we just need to label each crossing with a number and then write down the sequence of numbers in the order they appear as we walk along the knot. This has the problem that it does not determine the knot uniquely, for example it does not distinguish the trefoil knot from its mirror image.
- Signed Gauss codes, SG codes — This is an improvement over the previous representation: we also store the handedness of each crossing. This is the representation we are going to use for the various operations on the knots.
- Dowker-Thistlethwaite codes, DT codes — This is another, more compact, representation present in KnotInfo and is based on crossing labels. We label each crossing (twice) in order from 1 to 2n. Each crossing will get an even and odd label and we take only the even labels in order. When an even number is in under-strand position we mark it with a minus sign.
There are also other codes, like braid representations, that we will not be using in this project.
PD codes
For our program we are going to use the PD code as input for our algorithm, so let’s explain how it is derived from a knot or link.
The main source for this section is the article on PD notation from KnotInfo. The PD code for a link is generated by labelling each arc of the link with a number. Then we choose a starting point for each component and process each component in order.
For each component we walk along it from the starting point, following the component orientation. When we pass a crossing as the over-strand we write down the 4-tuple of labels of the arcs incident to the crossing, in counter-clockwise order, starting from the entering under-crossing arc.
Algorithm: The input is an oriented link diagram with starting points on each component and the output is a list of 4-tuples.
Choose an ordering for the components and starting point for each component
Label each arc with a number
For each component:
Walk along it from the starting point in its orientation
At each crossing, when at an over-crossing for the current strand: write down the 4-tuple of labels of the arcs incident to the crossing, in counter-clockwise order, starting from the entering under-crossing arc
Let’s see an example of how to construct the PD code for the following link diagram
First let's choose a starting point for each component and label the arcs of the link accordingly. We will use the following convention:
Now we can start processing each component of the link by walking along it in its orientation and writing down the over-crossings we encounter.
Every crossing is passed as an over-crossing exactly once, so this algorithm terminates when all crossings have been visited. The final PD code for this link is
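To fix the notation with plain data, here is a commonly used PD code for the trefoil knot (not the link of the figure above), written as a Python list of 4-tuples:

# A commonly used PD code for the trefoil knot: three crossings, six arcs
trefoil_pd = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]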
SG Codes
Gauss originally developed a notation called Gauss codes based on labelling each crossing of a knot with a number and keeping track, using a sign, of whether we pass over or under at each crossing. This produces a list of numbers where each number appears exactly twice with different signs. This has a few problems, for example it doesn’t distinguish a knot from its mirror image.
This is solved by Signed Gauss Codes where we also store the information about the handedness of a crossing.
More precisely this is constructed as follows: for each component we walk along it and, when we pass a crossing, we write a tuple (\pm i, \pm 1) where the first component is the index of the crossing with a sign indicating whether this is an over-crossing or under-crossing; the second sign (which can be added in a second pass over the loop) is given by the handedness of the crossing using the following convention:
Algorithm: The input is an oriented link diagram with starting points on each component and the output is a list of components where each component is a list of pairs of numbers
Label each crossing with a number in order
For each component:
Walk along it from the starting point in its orientation
At each crossing with label i, write a tuple with components
+i or -i if this is an over-crossing or under-crossing
+1 or -1 if this is a right-handed or left-handed crossing
Converting one code to the other is fairly easy, as one just needs to first do a relabelling step to convert the crossing labels and then convert the over/under-strand and left/right-handedness information between the two notations.
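To make the notation concrete, here is one possible SG code for the right-handed trefoil under the conventions above (a hypothetical example, the labelling is of course not unique), written in the list-of-tuples form accepted by the SGCode.from_tuples constructor shown later:

# Walking along the right-handed trefoil we meet the crossings as
# o1 u2 o3 u1 o2 u3 and all three crossings are right-handed (+1)
trefoil_sg = [
    [(+1, +1), (-2, +1), (+3, +1), (-1, +1), (+2, +1), (-3, +1)],
]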
Comparison of PD and SG codes
We will now see how SG codes are better suited for the manipulations (switching and splicing) we need to do on a link. Indeed the KnotTheory Mathematica package when computing the Kauffman polynomial converts the input link from PD code into SG code as this is better suited for doing manipulations directly on the crossings.
Operations on PD and SG
Switches
- SG codes — To do this we need to swap the over/under sign of the two occurrences of the crossing, once for the over-strand and once for the under-strand. Then we also need to flip the handedness of the crossing, as the switch causes a handedness change.
- PD codes — This is more involved and requires cycling the crossing from (i, j, k, l) to one of (l, i, j, k) or (j, k, l, i) based on the crossing sign, and relabelling the whole affected components, as PD codes heavily rely on the sequentiality of the indices to tell the direction and end of a component.
Splicing
- SG codes — This is just a matter of splitting, reversing and rejoining lists correctly; this is a bit involved and will be covered more in depth later.
- PD codes — This can be approached in various, more or less performant, ways. One can add pairs (i, j), called “path” elements, to the original sequence of 4-tuples (the KnotTheory package has this extension of the PD notation with the P[i, j] element) telling us we joined the arc i with the arc j. This gives a heterogeneous list of elements that is more complex to handle efficiently in classical programming languages different from Mathematica. Another approach that keeps the list homogeneous is to manually splice the crossing and relabel all the arcs at each step. This causes in the worst case a continuous relabelling of all the crossings in the link at every splice and is not very efficient, as our algorithm needs to do splices very often.
Comparison of the two representations
Another downside of PD codes is the in-memory ordering of the crossings: walking along a component might require various jumps along the list. SG codes, on the other hand, already store each component in walking order and can be traversed linearly without jumps.
Finally SG codes are also more “space efficient”. Let N be the number of bits used to encode a natural number of maximum fixed size (e.g. N = 8 when using uint8 to encode numbers, which would be ok for knots/links with up to 128 crossings); for a link with n crossings and k components:
- PD codes use \approx 4n \times N bits of information, four numbers for each item.
- SG codes use \approx 2n \times (N + 1.5) + k \times \lceil\log_2(n)\rceil bits of information. Each crossing appears twice and we store its id, over/under and handedness information (each pair has the same handedness so we can store it only once per crossing); we also need to store the structure of the list with k \times \lceil\log_2(n)\rceil more bits.
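For a quick sanity check of these estimates, take N = 8 and a knot with n = 10 crossings and k = 1 component: PD codes need \approx 4 \cdot 10 \cdot 8 = 320 bits, while SG codes need \approx 2 \cdot 10 \cdot (8 + 1.5) + 1 \cdot \lceil\log_2(10)\rceil = 190 + 4 = 194 bits.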
So PD codes are simpler to store and to generate from a diagram, but SG codes are more space efficient and easier to manipulate.
Link reconstruction from code
We briefly mention that reconstructing a link from a PD or SG code is not trivial and there are various approaches, used by various software packages, for this task.
Integer Linear Programming
For example the KnotTheory package in Sage has a Link.plot() method that takes a link constructed from a PD code and plots it as follows:
# The "monster" unknot
L = Link([[ 3, 1, 2, 4], [ 8, 9, 1, 7], [ 5, 6, 7, 3], [ 4, 18, 6, 5],
[17, 19, 8, 18], [ 9, 10, 11, 14], [10, 12, 13, 11], [12, 19, 15, 13],
[20, 16, 14, 15], [16, 20, 17, 2]])
L.plot()
Sage internally uses a mixed integer linear programming (MILP) solver to generate a knot diagram from a PD code. Another library called Spherogram instead uses network flows. The problem solved here is finding an orthogonal presentation for the link with the minimum number of left and right bends.
Planar Graph Embeddings
Another approach used by KnotFolio is based on Tutte embeddings. A Tutte embedding or barycentric embedding of a simple, 3-vertex-connected, planar graph is a crossing-free straight-line embedding with the properties that the outer face is a convex polygon and that each interior vertex is at the average (or barycenter) of its neighbors’ positions.
This condition that every point is the average of its neighbors can easily be expressed as a system of linear equations where the points of a chosen outer face have been fixed. When the graph is planar and 3-vertex-connected the linear system is non-degenerate and has a unique solution.
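As an illustration (a minimal sketch, not code from this project), the barycentric system can be assembled and solved with numpy given an adjacency list and fixed positions for the outer face:

import numpy as np

def tutte_embedding(adjacency, fixed):
    # adjacency: dict vertex -> list of neighbour vertices
    # fixed: dict vertex -> (x, y) position, for the outer face vertices
    inner = [v for v in adjacency if v not in fixed]
    index = {v: i for i, v in enumerate(inner)}

    # for each interior vertex v: deg(v) * p_v - sum(interior neighbours) = sum(fixed neighbours)
    A = np.zeros((len(inner), len(inner)))
    b = np.zeros((len(inner), 2))
    for v in inner:
        A[index[v], index[v]] = len(adjacency[v])
        for u in adjacency[v]:
            if u in fixed:
                b[index[v]] += fixed[u]
            else:
                A[index[v], index[u]] -= 1

    positions = np.linalg.solve(A, b)
    return {**fixed, **{v: tuple(positions[index[v]]) for v in inner}}

# Example: wheel graph with a square outer face, the hub ends up at the barycenter (0.5, 0.5)
adj = {0: [1, 3, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [0, 2, 4], 4: [0, 1, 2, 3]}
outer = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
print(tutte_embedding(adj, outer))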
Computing the Polynomial
We tried two approaches for our algorithm, the first based on the closed form algorithm present in Kauffman’s paper and another “naive” one based directly on applying the skein relation. Let’s first recap the main formal closed form algorithm for computing the Kauffman polynomial.
Definition. Let K be an un-oriented link. We denote by \hat{K}(p) the standard unknot for K, where p is a directed starting point. This is built by considering the planar shadow U of K and walking along U starting from p, making each crossing an over-crossing the first time we pass through it.
Definition. Let’s give a name to the following modifications of K near a specific crossing i: S_i K is K with the crossing i switched, E_i K is K with the crossing i spliced horizontally, and e_i K is K with the crossing i spliced vertically.
First Approach
Now let K be an oriented link with n components, so K = K_1 \cup \cdots \cup K_n.
Definition. Let K be a link, \lambda = (\lambda_n, \dots, \lambda_0) a sequence of crossing indices of K and i the index of one of the crossings; let’s define the following operations:
- A_i^\lambda K := E_i S_{\lambda_i} \dots S_{\lambda_0} K
- B_i^\lambda K := e_i S_{\lambda_i} \dots S_{\lambda_0} K
Then let \lambda be a sequence of indices that brings K to \hat{K}, so that \hat{K}(\lambda) := S_{\lambda_n} \dots S_{\lambda_0} K, and define
\sum_K (\lambda) = \sum_{i=0}^n (-1)^i \left( L(A_i^\lambda K) + L(B_i^\lambda K) \right)
\Omega_K (\lambda) = (-1)^{|\lambda| + 1} L_{\hat{K}(\lambda)} + z \sum_K (\lambda)
Closed Form Algorithm. Here follows the algorithm that computes L_{K}(a,z).
- (i) If K = \hat{K} is a standard unknot then L_K (a, z) := a^{w(K)}.
- (ii) If K_1 overlies K_2, let d := (a + a^{-1}) / z - 1 and then
L(K_1 \cup K_2) := d L(K_1) L(K_2)
- (iii) If K = K_1 \cup \cdots \cup K_n:
  - If a K_i overlies another component then apply (ii).
  - If no K_i overlies all the others, let p_1, \dots, p_n be directed starting points on K_1, \dots, K_n and let \bar{p}_i be the same directed starting point with the opposite direction of p_i on K_i. Let \lambda(p_i) be the sequence of under-crossings of K_i with K - K_i, so that \hat{K}(\lambda(p_i)) = K_i \sqcup (K - K_i) with K_i overlying the rest of the components. At this point we can define L_K as
L_K (a, z) := \frac{1}{2n} \left[ \sum_{i=1}^n \sum_{q = p_i, \bar{p}_i} \left( (-1)^{|\lambda(q)|+1} d L_{K_i} L_{K - K_i} + z \sum_K (\lambda(q)) \right) \right]
- (iv) If K is a single component then let p be a directed starting point on K and \bar{p} the one with opposite direction. Let \lambda(p) be the switching sequence that brings it to the standard unknot \hat{K} and define
L_K (a, z) := \frac{1}{2} \left[ \sum_{q = p, \bar{p}} \left( (-1)^{|\lambda(q)|+1} L(\hat{K}(\lambda(q))) + z \sum_K (\lambda(q)) \right) \right]
Second Approach
After a first implementation we noticed that this closed form algorithm is not very efficient to compute. In this case the naive approach of just applying the skein rules recursively is more efficient, and we only use the closed form for handling the base cases and the case of multiple components. The final algorithm we implemented is the following:
Algorithm: We apply the following rules recursively
- If K is a standard unlink then return a^{w(K)}.
- If K has groups of components that overlie each other then we can apply the following rule: let K = K_1 \sqcup \cdots \sqcup K_n be these groups of components, one overlying the other, and let d = (a + a^{-1}) / z - 1; then
L(K_1 \sqcup \cdots \sqcup K_n) = d^{n-1} L(K_1) \cdots L(K_n)
This is borrowed from the closed form algorithm and lets us solve the case of disjoint components.
- If K is a linked component then we find \hat{K}, the standard unlink for K, and pick the first available crossing c to switch; then we have some cases based on the crossing over/under type and handedness that can be reduced to the following two:
L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_over.png}) = z (L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_h.png}) + L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_v.png})) - L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_under.png})
L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_under.png}) = z (L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_h.png}) + L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_v.png})) - L(\includegraphics[height=1.2em]{/articles/kauffman-polynomial/skein_over.png})
Actually, due to the symmetry of the Kauffman polynomial, we can use the same rule for both cases, so we can write
L_K(a,z) = z (L_{E_c K}(a,z) + L_{e_c K}(a,z)) - L_{S_c K}(a,z)
where E_c K and e_c K have one crossing less and S_c K has the same crossings as K but is one step closer to the standard unlink.
Python Implementation
The approach has been a mix of bottom-up and top-down: we first wrote the code for the main algorithm and then filled in the missing implementation for SGCode, writing many tests along the way. First we defined a couple of classes, SGCode and PDCode, to work with these codes and easily convert between them.
SG Codes
We are now going to walk through the class that lets us work nicely with SG codes. The classes we are going to use are all frozen data-classes to ensure immutability and enforce a more functional programming style.
class SGCodeCrossing:
    id: int
    over_under: typing.Literal[+1, -1]
    handedness: typing.Literal[+1, -1]

    def is_over(self) -> bool: ...
    def is_under(self) -> bool: ...
    def opposite(self) -> SGCodeCrossing: ...
    def switch(self) -> SGCodeCrossing: ...
    def flip_handedness(self) -> SGCodeCrossing: ...

class SGCode:
    components: list[list[SGCodeCrossing]]

    def relabel(self) -> SGCode: ...
    def to_minimal(self) -> SGCode: ...
    def writhe(self) -> int: ...
    def crossings_count(self) -> int: ...
    def reverse(self, ids: list[int] = None) -> SGCode: ...
    def mirror(self) -> SGCode: ...
    def overlies_decomposition(self) -> list[list[int]]: ...
    def std_unknot_switching_sequence(self) -> list[int]: ...
    def apply_switching_sequence(self, seq: list[int]) -> SGCode: ...
    def first_switch_to_std_unknot(self) -> typing.Union[int, bool]: ...
    def sublink(self, component_ids: list[int]) -> SGCode: ...
    def get_crossing_handedness(self, id: int) -> typing.Literal[+1, -1]: ...
    def get_crossing_indices(self, id: int) -> list[tuple[int, int, SGCodeCrossing]]: ...
    def switch_crossing(self, id: int) -> SGCode: ...
    def splice_h(self, id: int, orthogonal: typing.Literal[+1, -1] = +1) -> SGCode: ...
    def splice_v(self, id: int) -> SGCode: ...

    # static methods
    @staticmethod
    def from_tuples(components: list[list[tuple[int, int]]]) -> SGCode: ...
    @staticmethod
    def from_pd(pd_code: PDCode) -> SGCode: ...
Let’s now walk through the most important methods of this class.
Writhe
One of the first important things we need is to compute the writhe w(K) of a link; this is easily done with signed Gauss codes, as each component is a list of tuples where the second entry is the crossing sign. Let L be an oriented link with components C_1, \dots, C_k, each with crossings c_{i, j} with i = 1, \dots, k and j = 1, \dots, |C_i|.
Let’s notice that here each crossing appears twice, once as an over-crossing and once as an under-crossing; this is the reason for the 1/2 factor in the following formula, where \varepsilon(c) denotes the sign (or handedness) of the crossing c:
w(L) = \frac{1}{2} \sum_{i=1}^{k} \sum_{j=1}^{|C_i|} \varepsilon(c_{i,j})
which in code becomes the following
def writhe(self):
    return sum(
        c.handedness  # => +1 or -1
        for component in self.components
        for c in component
    ) // 2
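For example, using the hypothetical right-handed trefoil SG code from earlier, all six entries have handedness +1, so we get the expected writhe of 3:

trefoil = SGCode.from_tuples([
    [(+1, +1), (-2, +1), (+3, +1), (-1, +1), (+2, +1), (-3, +1)],
])
assert trefoil.writhe() == 3  # sum of handedness is 6, halved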
Standard Unknot
The next building block for computing the Kauffman polynomial is detecting and computing the standard unknot or unlink. Formally this is done by taking the planar shadow and a directed starting point on each component of the link. Then we can walk along the shadow of each component in order and make each crossing an over-crossing the first time we pass through it.

On the other hand our algorithm directly works with switching sequences \lambda that bring a link L to its standard unlink \hat{L}. We wrote methods to directly compute and apply these switching sequences.
The std_unknot_switching_sequence method just walks along each component in its orientation, marking which switches have to be made to bring that link to its standard unknot.
def std_unknot_switching_sequence(self) -> list[int]:
    visited_crossings: set[int] = set()
    switched_crossings: list[int] = []  # resulting list of ids to switch

    for component in self.components:
        for crossing in component:
            if crossing.id not in visited_crossings:
                if crossing.is_under():
                    switched_crossings.append(crossing.id)
                visited_crossings.add(crossing.id)

    return switched_crossings
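Continuing with the same hypothetical trefoil code: walking along it, crossings 1 and 3 are first met as over-crossings while crossing 2 is first met as an under-crossing, so a single switch is enough to reach the standard unknot:

assert trefoil.std_unknot_switching_sequence() == [2]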
Now we need to be careful when applying these switches as they do not preserve the writhe as we can see in the following figure

First we wrote two methods: SGCodeCrossing.switch(), which creates a new switched crossing from an existing one, and another one on SGCode that performs the switch on both occurrences of the crossing.
def switch(self):
    return SGCodeCrossing(
        self.id,
        -self.over_under,
        -self.handedness
    )

def switch_crossing(self, id: int):
    return SGCode([
        [
            crossing.switch()
            if crossing.id == id else crossing
            for crossing in component
        ]
        for component in self.components
    ])
With this infrastructure in place, applying a switching sequence is just a matter of applying all the switches in the given sequence; this can be done in a single pass with the following function.
def apply_switching_sequence(self, seq: list[int]) -> SGCode:
    return SGCode([
        [
            crossing.switch()
            if crossing.id in seq else crossing
            for crossing in component
        ]
        for component in self.components
    ])
In the actual Kauffman polynomial computation we just compute the first available switch in the switching sequence and apply just that one.
def first_switch_to_std_unknot(self) -> typing.Union[int, bool]:
    visited_crossings: set[int] = set()

    for component in self.components:
        for crossing in component:
            if crossing.id not in visited_crossings and crossing.is_under():
                return crossing.id
            visited_crossings.add(crossing.id)

    return False
Crossing Splices
The splicing code is more involved due to the number of cases to analyze; let’s first see formally what we need to do.
First we can assume the entering over-strand is in the top left corner of the diagram (this can be achieved by applying a small local isotopy). Then we have the following cases:
- Splice type: horizontal or vertical
- Crossing sign: left-handed or right-handed
- Crossing type: self-crossing or crossing between two different components
So we have a total of 2 \times 2 \times 2 = 8 cases to analyze. The following diagram shows all the possible cases for horizontal splicing for signed gauss codes.

Let’s explain this diagram a bit: each label is a part of the list for the component, and the "-" sign tells us that part is walked in the opposite order and must be reversed in the final list.
The first two cases in the top left are the ones where the splice happens on a self-crossing, that is, the crossing is between two parts of the same strand. We can assume the starting point is before the over-strand; the result is the same as we can just rotate the list to get into this configuration. So if this component has n crossings (meaning 2n appearances in the list) and i and j are the indices of the over-strand and under-strand occurrences respectively, the code will be [ \cdots, [ C_1, \cdots, C_i, \cdots, C_j, \cdots, C_{2n} ], \cdots ] where C_k = (c_k, s_k) is a pair of crossing id and sign (over/under). To apply the splice we remove the crossings (c_i, s_i) and (c_j, s_j) from that component list and rejoin following the orientation of the strand starting from the starting point (a small code sketch of this slicing is given after the two cases below).
- In the positive crossing case the horizontal splice splits the component in two: the first one is composed of the first and third part one after the other, and the other one is composed only of the second part. More precisely we have the following two new components
[ C_1, \cdots, C_{i-1}, C_{j+1}, \cdots, C_{2n} ] \qquad [ C_{i+1}, \cdots, C_{j-1} ]
- In the negative crossing case we first walk the first part of the list, then the second part in reverse and finally the third part. So the new component will be
[ C_1, \cdots, C_{i-1}, \underbrace{C_{j-1}, C_{j-2}, \cdots, C_{i+2}, C_{i+1}}_{\text{reversed part}}, C_{j+1}, \cdots, C_{2n} ]
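Stripped of the handedness corrections discussed next, the two self-crossing cases boil down to list slicing; here is a rough sketch (not the project's actual splice_h, and using 0-based indices i < j for the two occurrences of the crossing):

def splice_self_crossing_h(component, i, j, positive):
    # split the component around the two occurrences of the crossing being spliced
    l1, l2, l3 = component[:i], component[i + 1:j], component[j + 1:]
    if positive:
        # positive crossing: the component splits into two new components
        return [l1 + l3, l2]
    else:
        # negative crossing: a single component with the middle part walked in reverse
        # (the handedness fix-up for the reversed part is omitted here)
        return [l1 + list(reversed(l2)) + l3]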
Handling the reversed part is actually more involved than just reversing the list of crossings: we also need to correct the signs of the crossings to account for the new orientation, as shown below.

To do this we need to flip the crossing sign (handedness) of every crossing id that occurs in the part we are reversing. Notice this can even end up altering the signs of crossings in other components, so we need to be careful. Let’s see for example the code for the negative crossing horizontal splice case (the Python operator ^ is the symmetric difference of sets):
over_crossing_ids = set(c.id for c in l2 if c.is_over())
under_crossing_ids = set(c.id for c in l2 if c.is_under())
non_self_crossing_ids = over_crossing_ids ^ under_crossing_ids

def update_signs(strand: Iterable[SGCodeCrossing]):
    return [
        c.flip_handedness() if c.id in non_self_crossing_ids else c
        for c in strand
    ]

return SGCode([
    *(
        update_signs(component)
        for i, component in enumerate(self.components)
        if i != component_index
    ),
    [
        *update_signs(l1),
        *update_signs(reversed(l2)),
        *update_signs(l3),
    ],
])
The code for the vertical splices is omitted as the cases are the same as for the horizontal splices, just flipped with respect to handedness. All the cases for vertical splices are shown in the following diagram; we can see that the output lists are the same as in the previous splice case, just switched based on the crossing sign.

So the final code is just a conversion of all these cases to list slicing and re-joining, with the appropriate crossings removed and signs updated correctly.
Code for computing the Kauffman polynomial
The final code for computing the Kauffman polynomial is the following; the main idea of the algorithm was already described previously. The code uses the SGCode class defined above to represent links. At the top level we define global variables a and z using the sympy library for working with polynomials.
a, z = symbols("a z")
d = (a + 1 / a) / z - 1

def kauffman_polynomial(link: SGCode) -> Poly:
    if len(link.components) == 0:
        return 0

    component_groups = link.overlies_decomposition()

    if len(component_groups) == 1:
        unknot_index = link.first_switch_to_std_unknot()

        link_rev = link.reverse()
        unknot_index_rev = link_rev.first_switch_to_std_unknot()

        if unknot_index == False or unknot_index_rev == False:
            # base case: the link is already a standard unknot/unlink
            return a ** link.writhe()
        else:
            # apply the skein relation at the first crossing to switch
            link_switched = link.switch_crossing(unknot_index)
            link_spliced_h = link_switched.splice_h(unknot_index)
            link_spliced_v = link_switched.splice_v(unknot_index)

            k_link_spliced_h = kauffman_polynomial(link_spliced_h)
            k_link_spliced_v = kauffman_polynomial(link_spliced_v)
            k_link_switched = kauffman_polynomial(link_switched)

            return (
                z * (k_link_spliced_h + k_link_spliced_v)
                - k_link_switched
            )
    else:
        # groups of components overlying each other: multiply, with a factor d
        # between consecutive groups
        result = 1
        for k, component_ids in enumerate(component_groups):
            new_link = link.sublink(component_ids)

            if k > 0:
                result *= d

            result *= kauffman_polynomial(new_link)

        return result
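As a quick check, feeding it the Hopf link SG code that also appears in the debug trace further below gives the same result as the trace (up to the ordering of the terms):

hopf = SGCode.from_tuples([
    [(+1, +1), (-2, +1)],
    [(+2, +1), (-1, +1)],
])
print(kauffman_polynomial(hopf))
# => a*z - a/z + 1 + z/a - 1/(a*z)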
Given this we can also write a function for computing the normalized Kauffman polynomial F_K(a, z) = a^{-w(K)} L_K(a, z) as follows
def f_polynomial(link: SGCode) -> Poly:
    # the writhe is an integer, so a ** (-writhe) is a Laurent monomial in a
    return sympy.expand(a ** (-link.writhe()) * kauffman_polynomial(link))
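For the same hypothetical Hopf link code the writhe is 2 under our conventions, so F is just the previous result multiplied by a^{-2} (again up to term ordering):

print(f_polynomial(hopf))
# => 1/a**2 + z/a + z/a**3 - 1/(a*z) - 1/(a**3*z)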
Debugging and Optimizations
The code (omitted from above, see the GitHub repository for the full code) has actually been instrumented with some helper functions to ease debugging and optimizations.
- Use the cache Python decorator to memoize all calls to kauffman_polynomial.
- Another decorator called log_input_output helps with debugging by printing traces like the following one for the Hopf link

● kauffman_polynomial([[(+1, +1), (-2, +1)], [(+2, +1), (-1, +1)]])
│ [i] not cached...
│ [i] applying skein
│ [i] splice h, lambda = [2, ...]
│ ● kauffman_polynomial([[(+1, -1), (-1, -1)]])
│ │ [i] not cached...
│ │ [i] standard unknot form
│ └─▶ 1/a
│ [i] splice v, lambda = [2, ...]
│ ● kauffman_polynomial([[(+1, +1), (-1, +1)]])
│ │ [i] not cached...
│ │ [i] standard unknot form
│ └─▶ a
│ [i] switch, lambda = [2, ...]
│ ● kauffman_polynomial([[(+1, +1), (+2, -1)], [(-1, +1), (-2, -1)]])
│ │ [i] not cached...
│ │ [i] split link: [[0], [1]]
│ │ ● kauffman_polynomial([[]])
│ │ │ [i] not cached...
│ │ │ [i] standard unknot form
│ │ └─▶ 1
│ │ ● kauffman_polynomial([[]])
│ │ └─▶ 1
│ └─▶ a/z - 1 + 1/(a * z)
└─▶ a * z - a/z + 1 + z/a - 1/(a * z)

- Finally there is another decorator called polynomial_wrapper that helps with applying optimizations to the function calls and is defined as follows

OptimizationType = Literal['expand', 'relabel', 'to_minimal']

def polynomial_wrapper(optimizations: set[OptimizationType] = {'expand'}):
    def decorator(func: Callable[[SGCode], Poly]) -> Callable[[SGCode], Poly]:
        @functools.wraps(func)
        def wrapper(link: SGCode) -> Poly:
            # First we convert to minimal rotated form and only then we relabel,
            # this ensures a consistent indexing for the cache.
            if 'to_minimal' in optimizations:
                link = link.to_minimal()
            if 'relabel' in optimizations:
                link = link.relabel()

            result = func(link)

            if 'expand' in optimizations:
                result = sympy.expand(result)

            return result

        return wrapper

    return decorator

This applies sympy.expand(...) to keep the resulting polynomial simplified and prepends each call with the two following optimizations:

- to_minimal: This “rotates” the list of each component in the SG code, bringing each list into minimal lexicographical order.
- relabel: After the minimal rotation we apply a relabelling to increase the chances of hitting the cache decorator.

These two optimizations decrease the number of calls by almost an order of magnitude. For example, for the knot 12n_888 we measured the optimizations lattice (see the figures in the report), where the total number of recursive calls to kauffman_polynomial is in blue and the execution time in green. As we can see, the most important optimization is to_minimal, which on its own reduces the total number of calls from 29K to 8K, but relabelling is still able to help.

- All optimizations are enabled by default in the final program and debugging traces are disabled to not impact the speed.
Experiments
The main achievement of this project was to check all (normalized) Kauffman polynomials present in the KnotInfo database. Using the Python package database_knotinfo we loaded all the data and wrote a script called ./check_knotinfo.py with the following options
usage: check_knotinfo.py [-h] [--knots] [--links] [-c COUNT]
Calculate the Kauffman polynomial of a knot or link.
options:
-h, --help show this help message and exit
--knots Include knots from database
--links Include links from database
-c, --count COUNT Number of knots to test per database
This uses the multiprocessing library to parallelize the computation of the Kauffman polynomial across the database entries and checks all the databases of knots and links in about 15 minutes on a 12-core CPU.
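The checking loop looks roughly like this (a sketch with a hypothetical entry format, not the actual script; the real script builds the SGCode from the KnotInfo PD notation and handles the command line options shown above):

from multiprocessing import Pool

import sympy

def check_entry(entry):
    # entry is assumed to be a triple (name, sg_code_tuples, expected_polynomial)
    name, sg_tuples, expected = entry
    computed = f_polynomial(SGCode.from_tuples(sg_tuples))
    return name, sympy.simplify(computed - expected) == 0

def check_all(entries):
    with Pool() as pool:  # one worker process per core by default
        for name, ok in pool.imap_unordered(check_entry, entries):
            if not ok:
                print(f"mismatch for {name}")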
We ran this on all knots and links in the database and found the following results:
- For links: all 4187 entries include the Kauffman polynomial and all of them are correctly reproduced by our algorithm.
- For knots: out of 12965 knots only 2977 include the Kauffman polynomial (they are computed only up to 12 crossings even though the database contains knots with up to 13 crossings) and we found a mismatch in the computation of the polynomial of 10_{125}.
The knot 10_{125}


This knot is chiral meaning that it is not equivalent to its mirror. Our algorithm computes the following polynomial for this knot
while the polynomial for the knot present in KnotInfo is the following
Then we noticed that they are related by the substitution (a \mapsto 1/a, z \mapsto z). The Kauffman polynomial has the property that taking the mirror image of a knot inverts the variable a; in this sense it is able to distinguish chiral variants of knots.
So we checked against the implementation of the Kauffman polynomial in the KnotTheory Mathematica package: using the PD code from the KnotInfo database, both Mathematica and our algorithm give the same result. This makes us believe that there is a mismatch in KnotInfo between the PD code and Kauffman polynomial columns, meaning they are not correctly synchronized.
Performance Analysis
We also checked the performance of our algorithm on all knots and links in the database with up to 12 crossings (these are the ones with the Kauffman polynomial in the database). The results are shown in the following histograms, where each bar counts the number of knots that took an amount of time falling in the corresponding bin.


We can see that the times are on the order of a couple of seconds per knot or link. This could be improved by using more sophisticated caching techniques, but we did not implement this as the performance was already good enough for our purposes.
Conclusion
In this project we implemented the Kauffman polynomial from scratch in Python using signed Gauss codes. We compared the results of our algorithm with the ones present in the KnotInfo database and found a mismatch in the polynomial of the knot 10_{125}. We believe that the error is due to a mismatch between the PD code stored in the database and the corresponding Kauffman polynomial.
This was a very fun project I did for university and with some real outcomes for once. I have some more ideas about this and I will probably write at least one more post about this and some ideas I want to develop further.