Chapter 4
Pattern Matching in Large Graph Transformation Systems
To our knowledge, the first practical proposal for a GTS-based quantum compiler was presented in Xu, 2022 (Quartz: Superoptimization of Quantum Circuits. In Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation, June 2022. Association for Computing Machinery, 625--640. doi: 10.1145/3519939.3523433) and then refined in Xu, 2023 (Synthesizing Quantum-Circuit Optimizers. Proceedings of the ACM on Programming Languages 7, PLDI (June 2023), 835--859. doi: 10.1145/3591254). In these, the set of possible graph transformations is obtained by exhaustive enumeration. Using SAT solvers and fingerprinting techniques, the set of all small programs up to a certain size can be generated ahead of time and clustered into disjoint partitions of equivalent programs. This concisely expresses every possible peephole optimisation up to the specified size: for every small enough subset of operations of an input program, its equivalence class can be determined. Any replacement of that set of operations with another program in the same equivalence class is a valid transformation and, thus, a potential peephole optimisation. Transformation systems on minIR graphs based on equivalence classes were formalised in section 3.4.
First results of this approach are promising. Xu, 2022 demonstrated that optimisation performance improves markedly with larger sets of transformation rules. Such workloads, however, rely heavily on pattern matching, the computational task that identifies subgraphs on which transformation rules apply. In Xu, 2022 and Xu, 2023, pattern matching is carried out separately for each pattern. This becomes a significant bottleneck for large rule sets. In Xu, 2022, performance peaks at around 50,000 transformation rules, after which the additional overhead from pattern matching becomes dominant, deteriorating the compilation results.
In this chapter, we solve these scaling difficulties by presenting an algorithm for pattern matching on minIR graphs that uses a pre-computed data structure to return all pattern matches in a single query. The set of transformation rules is directly encoded in this data structure. After a one-time cost for construction, pattern-matching queries can be answered in running time independent of the number of rules in the transformation system.
The asymptotic complexity results presented in this chapter depend on some simplifying assumptions on the properties that the pattern graphs and embeddings must satisfy. This represents a restriction on the generality of minIR graphs, but we do not find that they restrict the usefulness of the result in practice. As discussed in section 4.7, none of these assumptions are required in practice for the implementation. We have not observed any impact on performance when the imposed constraints are lifted, so we conjecture that at least some of these assumptions can be relaxed and our results generalised.
After a discussion of related work in section 4.1, Section 4.2 presents the assumptions that we are making in detail, along with some relevant definitions for the rest of the chapter. Sections 4.3, 4.4, and 4.5 present the core ideas of our approach, respectively introducing: a reduction of minIR graphs to equivalent trees, a canonical construction for the tree reduction and an efficient way to enumerate all possible subtrees of a graph. We also prove bounds on the size and number of the resulting trees.
In section 4.6, we introduce a pre-computation step and show that the pattern-matching problem reduced to tree structures can be solved using a prefix tree-like automaton that is fixed and pre-computed for a given set of patterns. Combining the automaton construction with bounds from section 4.5 leads to the final result. We conclude in section 4.7 with benchmarks on a real-world dataset of 10,000 quantum circuits, obtaining a 20x speedup over a leading C++ implementation of pattern matching for quantum circuits.
4.1. Related work
Our proposed solution can be seen as a specialisation of RETE networks (Forgy, 1982. Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence 19, 1 (September 1982), 17--37. doi: 10.1016/0004-3702(82)90020-0; Varró, 2013. A Rete Network Construction Algorithm for Incremental Pattern Matching) and derivatives (Ian, 2003. The execution kernel of RC++: RETE*, a faster RETE with TREAT as a special case. International Journal of Intelligent Games and Simulation 2, 1 (Feb 2003), 36--48; Armstr., 2014. Memory Efficient Stream Reasoning on Resource-Limited Devices; Miranker, 1987. TREAT: a new and efficient match algorithm for AI production systems) to the case of graph pattern matching. The additional structure obtained from restricting our considerations to graphs results in a simplified network design that allows us to derive worst-case asymptotic runtime and space bounds that are polynomial in the parameters relevant to our use case1 – overcoming a key limitation of RETE.
Another well-studied application of large-scale pattern matching is in the context of stochastic biomolecular simulations (Sneddon, 2010. Efficient modeling, simulation and coarse-graining of biological complexity with NFsim. Nature Methods 8, 2 (December 2010), 177--183. doi: 10.1038/nmeth.1546; Bachman, 2011. New approaches to modeling complex biochemistry. Nature Methods 8, 2 (January 2011), 130--131. doi: 10.1038/nmeth0211-130), particularly the Kappa project (Danos, 2004. Formal molecular biology. Theoretical Computer Science 325, 1 (September 2004), 69--110. doi: 10.1016/j.tcs.2004.03.065). Stochastic simulations depend on performing many rounds of fast pattern matching for continuous Monte Carlo simulations (Yang, 2008. Kinetic Monte Carlo method for rule-based modeling of biochemical networks. Physical Review E 78, 3 (September 2008), 031910. doi: 10.1103/physreve.78.031910). However, unlike our use case, the procedure typically does not need to scale well to a large number of patterns. In Danos, 2007 (Scalable Simulation of Cellular Signaling Networks), Danos et al. introduced a pre-computation step to accelerate matching by establishing relations between patterns that activate or inhibit further patterns. This idea was later expanded upon and formalised in categorical language in Boutil., 2017 (Incremental Update for Graph Rewriting). The ideas presented in Boutil., 2017 are similar to ours; their formalism has the advantage of being more general but does not present any asymptotic complexity bounds and suffers from similar worst-case complexities as RETE.
A similar problem has also been studied in the context of multiple-query optimisation for database queries (Sellis, 1988. Multiple-query optimization. ACM Transactions on Database Systems 13, 1 (March 1988), 23--52. doi: 10.1145/42201.42203; Ren, 2016. Multi-Query Optimization for Subgraph Isomorphism Search. Proceedings of the VLDB Endowment 10, 3 (November 2016), 121--132. doi: 10.14778/3021924.3021929), but work there has been limited to caching strategies and search heuristics for specific use cases. Finally, using a pre-compiled data structure for pattern matching was already proposed in Messmer, 1999 (A decision tree approach to graph and subgraph isomorphism detection. Pattern Recognition 32, 12 (December 1999), 1979--1998. doi: 10.1016/S0031-3203(98)90142-X). However, with a space complexity of (where is the input size and the pattern size), it does not scale to large input graphs, even for small patterns.
RETE networks have been shown to have exponential worst-case space (and thus time) complexity (Rakib, 2018. An Efficient Rule-Based Distributed Reasoning Framework for Resource-bounded Systems. Mobile Networks and Applications 24, 1 (October 2018), 82--99. doi: 10.1007/s11036-018-1140-x), although performance in practical use cases can vary widely (Uddin, 2016. Resource-Bounded Context-Aware Applications: A Survey and Early Experiment).
4.2. Preliminaries and simplifying assumptions
For simplicity, we will throughout consider minIR graphs that admit a type system , though most of the results can also be adapted to other graph domains. We will write for the types of (i.e. its values) and for the operation types (i.e. its edges).
Linear paths and operation splitting
An operation type in the type system is a hyperedge. Its endpoints are strings of data types that define the input and output signature of the operation . We can refer to the set of all hyperedge endpoints of using the string indices ( denotes the disjoint set union):
Fix a partition of into disjoint pairs
where the last set of the partition may be a singleton if is odd. For every , we then define new split operation types , each with two endpoints: the -th operation type has endpoints and in . For every operation of type , we can then split into operations each of arity 1 or 2 and of types respectively. We will refer to the graph transformation that replaces an operation in a minIR graph with the operations for as operation splitting.
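To make the pairing concrete, here is a minimal Python sketch of one such convention: pairing the i-th use with the i-th define, with leftover endpoints becoming singletons. The flat list-of-endpoints encoding and the function name are hypothetical, for illustration only.

```python
def split_operation(inputs, outputs):
    """Split an operation into operations of arity at most 2.

    Pairs the i-th input with the i-th output; leftover endpoints of the
    longer signature become singleton (arity-1) split operations.
    (Hypothetical encoding: endpoints as plain lists.)
    """
    paired = [[i, o] for i, o in zip(inputs, outputs)]
    k = min(len(inputs), len(outputs))
    leftover = [[e] for e in inputs[k:] + outputs[k:]]
    return paired + leftover


# An operation with 3 sources and 2 targets yields 3 split operations,
# of arities 2, 2 and 1.
```

Any fixed partition into disjoint pairs works, as long as it depends only on the operation type, so that the splitting is invariant under typed morphisms.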
It is important to note that the splitting of an operation is unique and given by the type of , and thus invariant under (typed) morphisms: there is a morphism of a pattern into a graph if and only if there is a morphism from the split pattern into the split graph .
A transformation rule splitting an operation with 3 sources and 2 targets. The choice of endpoint partition made here, obtained by pairing the -th use with the -th define, is arbitrary but convenient for quantum gates as they correspond to the input and output values of the same qubit.
The endpoint partitions also define linear paths. Two values in a minIR graph are on the same linear path if there are values with and such that is connected to through an operation and they correspond to the same pair of endpoints in the endpoints partition (i.e. the indices of correspond to values and in ).
Linearity assumption and rigidity
Recall that in Definition ., and refer to the subset of values that are within the domain of definition of and respectively. For this chapter, we will assume ; in other words, minIR graphs are IO-free (this is w.l.o.g., see discussion after Proposition .) and all values are linear1 (this is definitely not w.l.o.g.!).
As a result of this assumption, the subcategory of minIR graphs that we consider forms a rigid category, as introduced by Danos et al. Danos, 2014. 2014. Transformation and Refinement of Rigid Structures. The definition, which we reproduce here, is given in terms of morphisms that intersect all components of the codomain. We refer to Danos, 2014. 2014. Transformation and Refinement of Rigid Structures for the precise definition of that notion: in the context of linear-valued minIR graphs, this is equivalent to requiring that the image of the graph homomorphism intersects every connected component of the codomain.
A category is rigid if for all morphisms in that intersects all components of and for all that factorises as , then is unique.
In other words, there is a unique way to extend a morphism to a morphism , if it exists. If we interpret and as graph patterns that we are interested in matching with , then rigidity guarantees that there is (at most) a unique way to extend a match morphism into a match morphism on the larger pattern .
The linearity assumption also has other useful consequences. Every linear value has exactly one use and one definition. As a result, all linear paths are disjoint and form a partition of the values of the graph. They correspond to the paths that form the connected components of the fully split graph, i.e. the graph obtained by splitting every operation. We call the number of linear paths (and hence the number of connected components in the fully split graph) the circuit width, written . We also use the linear path decomposition to define circuit depth, written , as the longest linear path in .
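The linear-path decomposition can be sketched as a small union-find pass over the fully split graph; circuit width and depth then fall out of the resulting partition. The flat encoding below (value ids and endpoint tuples) is a hypothetical illustration, not the actual minIR data structure.

```python
def linear_paths(values, split_ops):
    """Linear-path partition of a fully split graph via union-find.

    `values` is a list of value ids and `split_ops` a list of endpoint
    tuples (arity 1 or 2) of the split operations (hypothetical flat
    encoding, for illustration only).  Two values joined by an arity-2
    split operation lie on the same linear path.
    """
    parent = {v: v for v in values}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for endpoints in split_ops:
        if len(endpoints) == 2:
            a, b = find(endpoints[0]), find(endpoints[1])
            if a != b:
                parent[a] = b

    paths = {}
    for v in values:
        paths.setdefault(find(v), []).append(v)
    # circuit width is the number of paths; circuit depth the longest one
    return list(paths.values())
```

With this encoding, `len(linear_paths(...))` is the circuit width and the maximum path length is the circuit depth.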
As discussed in section 3.4, minIR rewrites are instantiated from transformation rules by minIR match morphisms . Restricting our considerations to linear-valued minIR graphs has the further implication that all such match morphisms will be injective. We call an embedding and write it using Greek letters and a hooked arrow
Finding such embeddings is the pattern matching problem that we are solving. This problem is equivalent to finding minIR subgraphs of such that is isomorphic to the pattern .
Convexity
According to Proposition ., a necessary condition for a subgraph to define a valid minIR rewrite is convexity. In this chapter we weaken this requirement and propose a condition based on circuit width:
Let be an embedding of a pattern into a linear-valued minIR graph such that is a convex subgraph of . Then for every subgraph such that , it holds that
Proof
Up to isomorphism, we can assume . Suppose there is such that and . Let be partitions of and into sets of values that are on the same linear path of and respectively. It must hold that for all there is such that , because and operation splitting is preserved under embeddings. As the map from to cannot be injective, there must be and , such that and . We conclude that there must be a path in the fully split graph of between a value of and a value of that is not in the fully split graph of . Given that is convex, this path must be in , which contradicts the preservation of operation splitting under embeddings.
In this chapter, whenever we define a subgraph of a graph , we will assume that satisfies the above weakened convexity condition.
The converse of Proposition ., however, is not true. The pattern-matching technique presented below will find a strict superset of convex embeddings. To restrict considerations to convex embeddings, it suffices to filter out the non-convex ones in a post-processing step.
Ignoring minIR Hierarchy
So far, we have omitted discussing one part of the minIR structure: the nested hierarchy of operations. Syntactically, the hierarchy formed by relations between minIR operations can be viewed as just another value type that operations are incident to: parent operations define an additional output that children operations consume as additional input. Because of the bijectivity requirement of minIR morphisms on parent-child relations of Definition ., these parent-child relations behave, in fact, like linear values – and hence do not violate the linearity assumption we have imposed.
However, by treating them as such, we have further weakened the constraints on pattern embeddings. We do not enforce that boundary values must be in the same regions or that parent-child relations cannot be boundary values. Similarly to convexity, we defer checking these properties to a post-processing step.
Further assumptions (harmless)
We will further simplify the problem by making presentation choices that do not imply any loss of generality. First of all, we assume that all patterns have the same width and depth , are connected graphs and have at least 2 operations. These conditions can always be fulfilled by adding “dummy” operations if necessary. Embeddings of disconnected patterns can be computed one connected component at a time.
We will further assume that all operations are on at most two linear paths (and thus in particular, have at most 4 endpoints). Operations on linear paths can always be broken up into a composition of operations, each on two linear paths as follows:
Expressing an operation on linear paths as a composition of two operations on 2 linear paths.
This transformation leaves circuit width unchanged but may multiply the graph depth by up to a factor .
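A minimal sketch of this decomposition, under the hypothetical assumption that an operation is represented simply by the list of linear paths it touches: an operation on k paths becomes a chain of k-1 operations, each spanning two consecutive paths.

```python
def break_up(paths):
    """Break an operation spanning k linear paths into k-1 operations,
    each on two consecutive linear paths.

    `paths` is the (hypothetical) list of linear-path ids the operation
    touches; this sketches only the combinatorial shape of the
    decomposition, not the value plumbing between the new operations.
    """
    return [(paths[i], paths[i + 1]) for i in range(len(paths) - 1)]
```

Each resulting operation is on exactly two linear paths, which is why the width is unchanged while the depth can grow by a factor proportional to the original number of paths.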
We furthermore define the set of all port labels
so that we can associate every operation endpoint in a minIR graph with a port label from the set . We further endow the labels with a total order (for instance, based on the string index values). The total order on then induces a total order on the paths in that start in the same value : the paths are equivalently described by the sequence of port labels of the operations traversed. These form strings in , which we order lexicographically. Given a root value , for every value in there is thus a unique smallest path from to in 2. This path is invariant under isomorphism of the underlying graph (i.e. relabelling of the values and operations but preserving the port labels). With this we conclude the discussions of the specificities of minIR graphs related to typing, linearity and hierarchy, and the related assumptions that we are making.
To summarise, minIR graphs as they are considered in this chapter are hypergraphs (Definition .) that satisfy the following properties
- every vertex (value) is incident to exactly two hyperedges (operations). It is the target of one hyperedge (its definition) and the source of another one (its use),
- every hyperedge is incident to at most four vertices,
- every hyperedge can be split in a unique way (invariant under isomorphism) into at most two split operations, each with at most two endpoints.
When modelling subgraphs of IO-free minIR graphs (typically patterns for pattern matching), some hyperedge connections at the boundary of the subgraph will be missing. We say a value is open if a use or define operation is missing (i.e. it is a boundary value in a minIR subgraph).
We will simply refer to hypergraphs that satisfy the above assumptions as graphs. In the single instance in this chapter where a graph that does not satisfy these assumptions is referred to, we will specifically call it a simple graph.
We conclude with the following notable bound on circuit width.
Let be a graph with operations of odd arity (i.e. is odd) and open values. Then, the circuit width of is
Proof
For any linear path in consider its two ends and , i.e. the two values in with only one neighbouring value in (by definition linear paths cannot be empty). In the fully split graph of , these values are either open or must be connected to two operations. In the latter case, at least one of the operations must have a single endpoint (otherwise by acyclicity, the operation would have two neighbours).
In a fully split graph, operations with a single endpoint result from a split operation with an odd number of endpoints. We conclude that for every linear path, there are either two operations with an odd number of endpoints in , or one such operation and one open value, or two open values. The result follows.
This restriction is necessary for our results: copyable values may admit an arbitrary number of adjacent hyperedges. As a result, minIR graph pattern matching with copyable values is a generalisation of the subgraph isomorphism problem, a well-known NP-complete problem (Cook, 1971. The complexity of theorem-proving procedures. In Proceedings of the third annual ACM symposium on Theory of Computing (STOC '71). ACM Press, 151--158. doi: 10.1145/800157.805047). The approach generalises to non-linear types, but the complexity analysis no longer holds (we pay a computational price for every non-linear value matched).
Remark that the ordering of the operations thus defined is a particular case of a depth-first search (DFS) ordering of the graph: given an operation that has been visited, all its descendants will be visited before proceeding to any other operation.
4.3. Tree reduction
We reduce the problem of graph pattern matching to matching on rooted trees – as we will see in section 4.6, a much simpler problem to solve. The map between graphs and (rooted) trees is given by rooted dual trees. Call tree-like if is connected and the underlying undirected graph of is acyclic.
Let be a tree-like graph with operations . Then given a root operation , the rooted dual tree of rooted at , written is the tree given by
- the nodes of the tree are the operations of ,
- the parent and children of are the operations that share a value with in ; the parent is the unique operation on the path from to ,
- the children of an operation are ordered according to the port labels.
Unlike graphs, tree nodes are identified uniquely by their path from the root. Trees isomorphic as graphs with identical root are thus considered equal.
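A sketch of the rooted dual tree construction, assuming a tree-like graph given by its operations and a predicate telling whether two operations share a value (both hypothetical interfaces). The ordering of children by port label is elided for brevity.

```python
def rooted_dual_tree(ops, shares_value, root):
    """Rooted dual tree of a tree-like graph.

    `ops` lists the operations, `shares_value(a, b)` says whether two
    operations share a value, and `root` is the chosen root operation.
    Returns a parent-to-children map; because the underlying graph is
    tree-like, each non-root operation gets exactly one parent.
    (Hypothetical interface; children ordering by port label elided.)
    """
    tree = {root: []}
    visited = {root}
    stack = [root]
    while stack:
        op = stack.pop()
        for other in ops:
            if other not in visited and shares_value(op, other):
                visited.add(other)
                tree.setdefault(op, []).append(other)
                tree.setdefault(other, [])
                stack.append(other)
    return tree
```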
A tree reduction using path splitting
To reduce a graph to a tree using the rooted dual tree construction, it suffices to reduce to a tree-like graph. The following result shows that this can always be achieved by repeatedly applying operation splitting transformations.
A tree-like graph can be obtained from any connected graph by applying operation splittings. The resulting graph is a path-split graph (PSG) of .
Proof
Consider the undirected simple graph , where vertices are linear paths, and there is an edge between two linear paths for every operation that belongs to both paths. We call the interface graph of .
Splitting an operation in a graph corresponds to removing the corresponding edge in . On the other hand, the underlying undirected graph of has a cycle if and only if there is a cycle in . Indeed, a cycle in cannot belong to a single linear path in , by acyclicity of minIR graphs. There is, therefore, a cycle of operations that span multiple linear paths, thus forming a cycle in .
Hence, the operations to be split to turn into a tree-like graph are given by the set of edges in that must be removed to obtain a spanning tree of 1.
As we consider typed graphs, the splitting of an operation is unique; however, the choice of spanning tree of is not unique, and thus multiple PSGs exist for a given graph .
If is a PSG of some graph , then call an operation of an anchor operation if it is on two linear paths and it is not split in . The set of all anchor operations fully determines the path-split graph. We write for the PSG of obtained by anchors .
Proof
We have assumed in section 4.2 that every operation in is on at most two linear paths and thus can be connected to at most four values. Each value is linear and hence connected to at most one other operation. It results that every operation in has at most four neighbouring operations – one parent and three children. A tree leaf can be chosen as the root operation to ensure the root does not have four children.
We can make the path splitting transformation reversible by separately storing the set of split operations in that correspond to a single operation in . As every operation of can get split in at most two split operations, we can store the pairs of split operations in that correspond to an operation in in a partial map that defines weights for (a subset of) the operations of :
This maps a split operation to the unique undirected path in from to the other half of the split operation.
This defines a map , the inverse of the path splitting transformation .
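A hypothetical sketch of how the weight map makes splitting reversible: each split half records its sibling, and `unsplit` merges the pairs back. The flat dict encoding of operations (integer ids mapped to endpoint tuples) is for illustration only.

```python
def split_with_weights(ops, to_split):
    """Apply splitting while recording, for each split half, its sibling,
    so that the transformation can be reversed.

    `ops` maps operation ids to endpoint tuples; `to_split` is the set of
    operations to split into two halves.  (Hypothetical flat encoding.)
    """
    new_ops, weights = {}, {}
    fresh = max(ops) + 1
    for op, endpoints in ops.items():
        if op in to_split:
            a, b = fresh, fresh + 1
            fresh += 2
            half = (len(endpoints) + 1) // 2
            new_ops[a], new_ops[b] = endpoints[:half], endpoints[half:]
            # each half remembers the other so merging is unambiguous
            weights[a], weights[b] = b, a
        else:
            new_ops[op] = endpoints
    return new_ops, weights


def unsplit(new_ops, weights):
    """Reverse the splitting using the recorded weights."""
    ops, done = {}, set()
    for op, endpoints in new_ops.items():
        if op in weights:
            if op in done:
                continue
            other = weights[op]
            ops[op] = endpoints + new_ops[other]
            done.update({op, other})
        else:
            ops[op] = endpoints
    return ops
```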
Contracted path-split graphs
We can further simplify the structure of the data of a PSG by contracting all operations of that are on a single linear path. The result is the contracted path-split graph (cPSG) of , written .
We employ a similar trick as above to make this transformation reversible, this time by introducing weights on the values of that store the string of operations that were contracted2
where are the values of and are the optypes of operations in , i.e. the optypes of the minIR graph along with the optypes of the split operations. This defines a second map that is the inverse of the path-split graph contraction transformation . In summary, we have the composition
Contracted PSGs are particularly useful for the study of the asymptotic complexity of the pattern matching algorithm we propose, as they have a very regular structure. This is expressed by the following proposition that further extends the statement of Proposition 4.3:
Proof
That the tree is ternary follows from Proposition 4.3. Every node of the tree corresponds to an operation in , which is on exactly two linear paths. As a result of acyclicity of the tree, a tree of nodes spans linear paths – and hence, we conclude .
We conclude the construction presented in this section with the following result, expressing graph pattern matching in terms of tree equality:
Let be a pattern graph and a graph. Let be a PSG of . There is an embedding if and only if there is and a PSG of such that
and the trees have equal weight maps and .
The proof of this follows directly from our construction, the uniqueness of trees under isomorphism and the bijection between the graphs and their cPSGs.
We have thus successfully reduced the problem of pattern matching to the problem of matching on trees. Given that the ordering of children of a node in a tree is fixed, checking trees for equality is a simple matter of checking node and weight equality, one node (and edge) at a time.
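Since child order is fixed, the equality check is a single simultaneous traversal; the `(label, weight, children)` tuple representation below is a hypothetical stand-in for the weighted rooted dual trees.

```python
def trees_equal(t1, t2):
    """Equality of two rooted trees with ordered children.

    Each tree is a nested tuple (label, weight, children).  Because the
    children ordering is fixed, a single simultaneous traversal suffices,
    running in time linear in the tree size.  (Hypothetical encoding.)
    """
    label1, weight1, children1 = t1
    label2, weight2, children2 = t2
    if label1 != label2 or weight1 != weight2:
        return False
    if len(children1) != len(children2):
        return False
    return all(trees_equal(c1, c2) for c1, c2 in zip(children1, children2))
```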
We conclude this section with a figure summarising the constructions we have presented.
A graph , along with the path-split graph , the contracted path-split graph and their rooted dual trees. The anchor operations are (grey) and (red). The root of the rooted dual trees is .
4.4. Canonicalising the tree reduction
The reduction of graph matching to ternary trees from the previous section is a big step towards an algorithm for graph matching. However, Proposition 4.5 is expressed in terms of existence of PSGs – it is as yet unclear how the trees can be constructed. This is the purpose of this section.
We introduce for this purpose a canonical, that is, invariant under isomorphism, choice of PSG of . The result is a unique canonical transformation from to a cPSG that we can use for pattern matching.
We proceed by using the total order that we have defined on port labels, which can be extended lexicographically to paths outgoing from a shared root operation (see section 4.2 for more details). Whenever more than one path from to exists in , it suffices to consider the smallest one. For a choice of root operation we thus obtain a total order of all operations in .
We then restrict our attention to operations on two linear paths and consider them in order. We keep track of linear paths that have been visited and proceed as follows to determine whether must be split:
- if is on a linear path that was not seen before, it is left unchanged and the set of visited linear paths is updated;
- otherwise, i.e. is on two linear paths that have already been visited, the operation is split, resulting in two operations on a single linear path.
The pseudocode CanonicalPathSplit implements this algorithm. We use
Operations(G) to retrieve all the operations on the graph G and
LinearPaths(G, op) to retrieve the linear paths of the operation op. The
linear paths are identified using integer indices that can be pre-computed and
stored in linear time in the graph size. SplitOperation(G, op) returns the
graph resulting from splitting op into two operations on a single linear path.
Finally, PathAsPortLabels(G, root, v) returns the string of the port labels
that encode the path from root to v in the graph G. The strings are
ordered lexicographically. The non-capitalized functions set, union,
sort1, len, and issubset have their standard meanings.
1   def CanonicalPathSplit(G: Graph, root: Operation) -> Graph:
2       new_G := G
3       all_operations := Operations(G)
4       sorted_operations := sort(
5           all_operations,
6           sort_key = lambda v: PathAsPortLabels(G, root, v)
7       )
8
9       # keep track of the visited linear paths
10      seen_paths := set()
11      for op in sorted_operations:
12          # Get the (pre-computed) indices of the linear paths
13          op_linear_paths := LinearPaths(G, op)
14          if len(op_linear_paths) == 2:
15              if issubset(op_linear_paths, seen_paths):
16                  # The two linear paths of `op` are already visited
17                  new_G = SplitOperation(new_G, op)
18              else:
19                  # Mark the new linear paths as visited
20                  seen_paths = union(seen_paths, op_linear_paths)
21      return new_G
The following figure shows an example of splitting a graph into its canonical
PSG using CanonicalPathSplit.
Splitting a graph into its canonical PSG. Ports are ordered counter-clockwise on each edge, and numbered according to the lexicographic order of the paths from root to the port, as returned by PathAsPortLabels. This induces an order on the hyperedges, reflected in the alphabetic order of the edge labels. Linear paths are formed by ports in a horizontal line (as marked by the dotted lines). Edge root is chosen as the root of the canonical splitting. Edges d and g are not split because they are the smallest edges that contain the fourth, respectively first linear path.
CanonicalPathSplit(G) is a valid PSG of . It is deterministic and invariant under isomorphism of . The runtime of CanonicalPathSplit is , where is the number of operations in the graph .

Proof
Let be the graph returned by CanonicalPathSplit(G). From the discussion in the proof of Proposition 4.6, we know it is sufficient to show that the interface graph of is acyclic and connected.

is acyclic. If there were a cycle in , then there would be operations in that pairwise share a linear path. One of these operations must be considered last in the for loop of lines 11–20; suppose it is . But every linear path of is either also a linear path of or a linear path of : thus satisfies the condition on line 15, is split, and cannot be in , a contradiction. Hence is acyclic.
is connected. We proceed inductively to show the following invariant for the main for loop (lines 11–20): for all linear paths in seen_paths, there is a path in to a linear path of the root operation. seen_paths is only modified on line 20. If op is the root operation, then trivially there is a path from the linear paths op_linear_paths to a linear path of the root operation. Otherwise, we claim that one of the paths in op_linear_paths must already be in seen_paths. From there it follows that there is a path in from the root path to the unseen linear path, given by the path to the linear path in seen_paths followed by the edge in that corresponds to op.

By connectedness of , there is a path from the root operation to op. The path is not empty because op is not the root operation, so we can consider the prefix of the path of all operations excluding op. Call op' the last operation preceding op and op_linear_paths' its linear paths. Two successive operations on a path must share a linear path: op_linear_paths ∩ op_linear_paths' cannot be empty. According to line 4, op' must have been visited before op, thus op_linear_paths' ⊆ seen_paths. It follows that at least one element of op_linear_paths must be in seen_paths.
Deterministic and isomorphism invariant. The pseudocode above is deterministic and only depends on paths in encoded as strings of port labels, which are invariant under isomorphism.
Runtime complexity. Lines 2 and 3 run in time. With the exception of
the sort function on lines 4–7, every other line can be run in time:
- lines 13 and 15 run in constant time because the size of op_linear_paths is always at most 2;
- line 20 (and the subset check on line 15) can be run in constant time by representing the seen_paths set as a fixed-size boolean array of size , with the -th bit indicating whether the -th linear path has been seen;
- line 17 is a constant time transformation if we allow in-place modification of new_G.
The for loop runs one iteration per operation, for a total linear runtime. Finally, the sorting operation would naively take more than linear time. However, given that the ordering is obtained lexicographically from the paths starting at the root, we can obtain the sorted list of operations by depth-first traversal of the graph starting at the root. The result follows.
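The equivalence between lexicographic sorting and depth-first traversal can be illustrated with a short sketch; the adjacency representation below (each operation mapping port labels to children) is hypothetical, purely for illustration:

```python
def dfs_port_order(children, root):
    """Yield operations in depth-first order, visiting children in
    increasing port-label order. This coincides with sorting operations
    lexicographically by their path of port labels from the root.

    `children` maps an operation to {port_label: child_operation};
    this adjacency representation is hypothetical, for illustration.
    """
    visited = set()

    def visit(op):
        if op in visited:
            return
        visited.add(op)
        yield op
        # recurse into children in increasing port-label order
        for label in sorted(children.get(op, {})):
            yield from visit(children[op][label])

    yield from visit(root)


# Toy example with port-label paths: "" (a), [0] (b), [0, 0] (d), [1] (c)
children = {"a": {0: "b", 1: "c"}, "b": {0: "d"}}
order = list(dfs_port_order(children, "a"))
# DFS order a, b, d, c equals the lexicographic order of the paths
```

The single traversal avoids the comparison sort entirely, which is what makes the linear bound attainable.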
Using CanonicalPathSplit, we can now sketch what the pattern matching algorithm should look like. For each pattern, we first compute its canonical PSG for an arbitrary choice of pattern root operation; then, given a graph G, we can find all embeddings of patterns into G by iterating over all possible PSGs within G. Naively, this involves enumerating all possible subgraphs of G, and then for each of them, iterating over all possible root choices.
This can be significantly sped up by realising that many of the PSGs that are computed when iterating over all possible subgraphs and root choices are redundant. We will see in the next section that we can i) iterate once over all possible root choices in G and ii) introduce a new procedure AllPathSplits that will efficiently enumerate all possible rooted dual trees of PSGs rooted at the chosen root operation for subgraphs within G. In the process, we will also see that we can replace the tree equality check of line 12 with a subtree inclusion check, further reducing the number of PSGs that must be considered.
Naive pattern matching.
 1  # Precompute all PSGs
 2  allT = [CanonicalPathSplit(
 3      P, root_P
 4  ) for (P, root_P) in patterns]
 5
 6  for S in Subgraphs(G):
 7      for root_S in Operations(S):
 8          TG = CanonicalPathSplit(
 9              S, root_S
10          )
11          for T in allT:
12              if T == TG:
13                  yield T
Improved using AllPathSplits (section 4.5).
 1  # Precompute all PSGs
 2  allT = [CanonicalPathSplit(
 3      P, root_P
 4  ) for (P, root_P) in patterns]
 5
 6  for root_G in Operations(G):
 7      for TG in AllPathSplits(
 8          G, root_G
 9      ):
10          for T in allT:
11              # Replace == with subtree inclusion
12              if IsSubTree(T, TG):
13                  yield T
4.5. Enumerating all path-split graphs
The CanonicalPathSplit procedure in the previous section defines, for every graph and choice of root operation, a canonical PSG, and thus a canonical set of anchors. Instead of CanonicalPathSplit, we can equivalently consider a CanonicalAnchors procedure, which computes the anchor set directly instead of the PSG graph.
We formulate this computation below, using recursion instead of a for loop.
This form generalises better to the AllAnchors procedure that we will
introduce next.
The equivalence of the CanonicalAnchors procedure with CanonicalPathSplit
follows from the observation made in
section 4.2 that ordering operations in
lexicographic order of the port labels is equivalent to a depth-first traversal
of the graph.
CanonicalAnchors implements a recursive depth-first traversal (DFS), with the twist that the recursion is explicit only on the anchor nodes, otherwise relying on the lexicographic ordering just like in CanonicalPathSplit: lines 5–15 of CanonicalAnchors correspond to the iterations of the for loop (lines 11–20) of CanonicalPathSplit until an anchor operation is found (i.e. the else branch on lines 18–20 is executed). From there, the graph traversal proceeds recursively.
We introduce the ConnectedComponent, Neighbours and RemoveOperation procedures; the first returns the connected component of the current operation, whereas the other two procedures are used to traverse, respectively modify, the graph G. Importantly, Neighbours(G, op) returns the neighbours of op ordered by port label.
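A minimal sketch of such a port-label-ordered Neighbours helper (using a hypothetical representation where each operation stores a list of (port label, neighbour) pairs; the names are illustrative, not the thesis implementation):

```python
def neighbours(graph, op):
    """Return the neighbours of `op` ordered by port label.

    `graph` maps each operation to a list of (port_label, neighbour)
    pairs; this representation is hypothetical, for illustration only.
    """
    incident = graph.get(op, [])
    # sort incident edges by port label, then project out the neighbours
    return [nb for _, nb in sorted(incident, key=lambda edge: edge[0])]


g = {"op": [(2, "c"), (0, "a"), (1, "b")]}
ordered = neighbours(g, "op")  # neighbours listed in port-label order
```

The fixed iteration order is what makes the recursive traversal below deterministic and isomorphism-invariant.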
To ensure that the recursive DFS does not visit the same operation twice, we
modify the graph with RemoveOperation on lines 11 and 15, ensuring that no
visited operation remains in G. As a consequence, CanonicalAnchors may be
called on disconnected graphs, which explains why an additional call to
ConnectedComponent (line 4) is required.
CanonicalPathSplit and CanonicalAnchors. Let G be a connected graph and let root be a root operation in G. Then CanonicalAnchors maps the graph to the canonical anchor set, returned together with the set of all paths in G and the empty graph.
The proof follows directly from the previous paragraphs.
 1  def CanonicalAnchors(
 2      G: Graph, root: Operation, seen_paths: Set[int]
 3  ) -> (Set[Operation], Set[int], Graph):
 4      operations = Operations(ConnectedComponent(G, root))
 5      # sort by PathAsPortLabels, as previously
 6      sorted_operations := sort(operations)
 7      operations_queue := queue(sorted_operations)
 8
 9      # Skip all operations that are not anchors
10      op := operations_queue.pop()  # never empty, contains root
11      G = RemoveOperation(G, op)
12      while len(LinearPaths(G, op)) == 1 or
13            issubset(LinearPaths(G, op), seen_paths):
14          op = operations_queue.pop() or return ({}, {}, G)
15          G = RemoveOperation(G, op)
16
17      # op is anchor, update seen_paths and recurse
18      seen_paths = union(seen_paths, LinearPaths(G, op))
19      anchors := [op]
20      # sort by port labels
21      for child in Neighbours(G, op):
22          (new_anchors, seen_paths, G) = CanonicalAnchors(
23              G, child, seen_paths
24          )
25          anchors += new_anchors
26
27      return (anchors, seen_paths, G)
Maximal PSGs
In addition to “simplifying” the data required to define path splitting, the definition of PSGs using anchor operations has another advantage that is fundamental to the pattern matching algorithm.
Consider the rooted dual tree of a PSG with a chosen root operation. Recall that tree nodes are uniquely identified by their path from the root and thus are considered equal if they are isomorphic as graphs. We can in the same way define a tree inclusion relation on rooted dual trees that corresponds to checking that the trees have the same root and that the left-hand side is isomorphic to a subtree of the right-hand side. We also require that the operation weights given by the weight map coincide on the common subtree.
Let be a connected graph, a set of operations in and a root operation. Consider the set
There is a subgraph such that for all subgraphs : . Furthermore, for all graphs , there is and such that
We call the maximal PSG with anchors in .
The proof gives an explicit construction for .
Proof
Assume , otherwise the statement is trivial.
Construction of . Let be the set of linear paths in that go through at least one operation in . Consider the set of operations in given by the operations whose linear paths are contained in . This defines a subgraph of . Since , there exists . By assumption, is connected, and thus the anchors of are connected in . There is therefore a connected component that contains the set .
Well-definedness of . Consider the PSG of . We must show that is a tree-like graph for the proposition statement to be well-defined. In other words, we must show that the interaction graph of is acyclic and connected. is connected by construction, which implies connectedness of and thus of . It is acyclic because and has exactly operations on more than one linear path. is thus a tree.
. For any subgraph , its operations must be contained in . Since any is connected and contains , it must further hold that .
We can now prove the equivalence of (1).
: If , then there exists with rooted dual tree
Furthermore, by definition of on rooted trees, a map is defined on , given by the map of on the domain . Recall from section 4.3 that there is a map that maps
: Since , we know from point 1 that . Thus we can define an injective embedding .
Operation splitting leaves the set of values from to , as well as from to unchanged. Similarly, there is a bijection between values in and and thus between edges in and . The pattern embedding hence defines an injective map from tree edges in to tree edges in . We extend this map to a map on the trees by induction over the nodes set of . We start by the root map . Using , we can then uniquely define the image of any child node of in , and so forth inductively.
We show now that the map thus defined is injective. Suppose are nodes in such that . By the inductive construction there are paths from the root to and respectively such that their image under are two paths from to . But is a tree, so both paths must be equal. By bijectivity of , it follows , and thus is injective. Finally, the value and operation weights are invariant under pattern embedding and thus are preserved by definition.
This result means that instead of listing all PSGs for every possible subgraph of , it is sufficient to proceed as follows:
- for every pattern , fix a root operation and construct the rooted
tree dual of the canonical PSG
- enumerate every possible root operation in ,
- enumerate every possible set of anchors in with root ,
- for each set , find the maximal PSG with anchors in , and take its rooted tree dual ,
- find all patterns such that .
In other words, if AllAnchors is a procedure that enumerates all possible sets of anchors in G and MaximalPathSplit computes the maximal PSG as presented in the proof of Proposition 4.9, then AllPathSplits(G, root) can simply be obtained by calling AllAnchors and returning the respective maximal PSGs:
def AllPathSplits(G: Graph, root: Operation) -> Set[Graph]:
all_anchors = AllAnchors(G, root)
return {MaximalPathSplit(G, pi) for pi in all_anchors}
The missing piece: AllAnchors
We can now complete the definition of AllPathSplits by defining the AllAnchors procedure, which enumerates all possible sets of anchors in G given a root operation.
The procedure is similar to CanonicalAnchors, described in detail in the previous paragraphs. In addition to the arguments of CanonicalAnchors, AllAnchors requires a width argument w. It then returns all sets of at most w operations1 that form the canonical anchors of some width-w subgraph of G rooted at root. The main difference between CanonicalAnchors and AllAnchors is that the successive recursive calls (line 22 in CanonicalAnchors) are replaced by a series of nested loops (lines 27–33 in AllAnchors) that exhaustively iterate over the possible outcomes for different subgraphs of G. The results of every possible combination of recursive calls are then collected into a list of anchor sets, which is returned.
The parts of the pseudocode without comments are unchanged from CanonicalAnchors. Using Proposition 4.3, we know that we can assume that every operation has at most 3 children, and thus 3 neighbours in G, given that the operations corresponding to parent nodes were removed.
 1  def AllAnchors(
 2      G: Graph, root: Operation, w: int,
 3      seen_paths: Set[int] = {}
 4  ) -> List[(Set[Operation], Set[int], Graph)]:
 5      # Base case: return one empty anchor list
 6      if w == 0:
 7          return [({}, {}, G)]
 8
 9      operations = Operations(ConnectedComponent(G, root))
10      sorted_operations := sort(operations)
11      operations_queue := queue(sorted_operations)
12
13      op := operations_queue.pop()
14      G = RemoveOperation(G, op)
15      while len(LinearPaths(G, op)) == 1 or
16            issubset(LinearPaths(G, op), seen_paths):
17          op = operations_queue.pop() or return [({}, {}, G)]
18          G = RemoveOperation(G, op)
19
20      seen0 = union(seen_paths, LinearPaths(G, op))
21      # There are always at most three neighbours: we
22      # unroll the for loop of CanonicalAnchors.
23      [child1, child2, child3] = Neighbours(G, op)
24      # Iterate over all ways to split w-1 anchors over
25      # the three children and solve recursively
26      all_anchors = []
27      for 0 <= w1, w2, w3 < w with w1 + w2 + w3 == w - 1:
28          for (anchors1, seen1, G1) in
29                  AllAnchors(G, child1, w1, seen0):
30              for (anchors2, seen2, G2) in
31                      AllAnchors(G1, child2, w2, seen1):
32                  for (anchors3, seen3, G3) in
33                          AllAnchors(G2, child3, w3, seen2):
34                      # Concatenate new anchor with anchors from all paths
35                      anchors = union([op], anchors1, anchors2, anchors3)
36                      all_anchors.push((anchors, seen3, G3))
37      return all_anchors
We can represent the sequence of recursive calls to AllAnchors as a tree. The call tree for the graph used as an example to illustrate CanonicalAnchors earlier is given on the next page.
We now show correctness of the procedure. Let us write for the set
of sets of anchors returned by AllAnchors(G, r, w, {}).
Let be a graph and be a subgraph of of width . Let be a choice of root operation in . We have
A call tree for an execution of AllAnchors on the example graph of the previous figure. Starting from the root, each node in the tree corresponds to either picking an operation as anchor or not (thus splitting it). Edges are labelled by the width values assigned to the respective children of the source node. One path from root to leaf leads to no solution (it is impossible to find an unseen linear path from the operation). The other paths each lead to a valid set of three anchors.
The proof is by induction over the width of the subgraph . The idea is to
map every recursive call in CanonicalAnchors to one of the calls to
AllAnchors on lines 29, 31 or 33. All recursive results are concatenated on
line 36, and thus, the value returned by CanonicalAnchors will be one of the
anchor sets in the list returned by AllAnchors.
Proof
Let H be a connected subgraph of G of width w. We prove inductively over w that if CanonicalAnchors(H, r, S) returns a set of anchors, then this anchor set is contained in the list returned by AllAnchors, for all valid root operations r of H and all subsets S of the linear paths of H in seen_paths. The statement in the proposition follows directly from this claim.
For the base case w = 1, CanonicalAnchors will return the anchors anchors = [op] as defined on line 19: there is only one linear path, and it is already in seen_paths, thus for every recursive call to CanonicalAnchors, the while condition on line 12 will always be satisfied until all operations have been exhausted and empty sets are returned. In AllAnchors, on the other hand, the only values of w1, w2 and w3 that satisfy the loop condition on line 27 for w = 1 are w1 = w2 = w3 = 0. As a result, given the w == 0 base case on lines 6–7, the lines 35 and 36 of AllAnchors are only executed once, and the definition of anchors on line 35 is equivalent to its definition in CanonicalAnchors.
We now prove the claim for w > 1 by induction. As documented in AllAnchors, we can assume that every operation has at most 3 children. This simplifies the loop on lines 21–25 of CanonicalAnchors to, at most, three calls to CanonicalAnchors.
Consider a call to CanonicalAnchors for a graph , a root
operation in and a set of linear paths. Let , and
be the length of the values returned by the three recursive calls to
CanonicalAnchors of line 22 for the execution of CanonicalAnchors with
arguments , and . Let and be the three neighbours of
in . If the child does not exist, then one can set and it
can be ignored – the argument below still holds in that case. The definition of
seen0 on line 20 in AllAnchors coincides with the update to the variable
seen_paths on line 18 of CanonicalAnchors; similarly, the updates to G on
lines 14 and 18 of AllAnchors are identical to the lines 11 and 15 of
CanonicalAnchors that update H. Let the updated seen_paths be the set
, the updated G be and the updated be , with
.
As every anchor operation reduces the number of unseen linear paths by exactly
one (using the simplifying assumptions of
section 4.2), it must hold that
. Thus, for a call to AllAnchors with the arguments
, , and , there is an iteration of the for loop on line 27 of
AllAnchors such that w1 , w2 and w3 . It follows
that on line 29 of AllAnchors, the procedure is called recursively with
arguments . From the induction hypothesis, we obtain that
there is an iteration of the for loop on line 29 in which the values of
anchors1 and seen1 coincide with the values of the new_anchors and
seen_paths variables after the first iteration of the for loop on line 21 of
CanonicalAnchors. Call the value of seen1 (and seen_paths) .
Similarly, call the updated value of G in AllAnchors and the updated
value of G in CanonicalAnchors . We have, by the induction hypothesis,
that .
Repeating the argument, we obtain that there are iterations of the for loops
on lines 30 and 32 of AllAnchors that correspond to the second and third
recursive calls to CanonicalAnchors on line 22 of the procedure. Finally, the
concatenation of anchor lists on line 36 of AllAnchors is equivalent to the
repeated concatenations on line 25 of CanonicalAnchors, and so we conclude
that the induction hypothesis holds for .
We will see that the overall runtime complexity of AllAnchors can be easily
derived from a bound on the size of the returned list. For this, we use the
following result:
AllAnchors. The length of the list returned by AllAnchors is in O(γ^w), where γ is a constant.
Proof
Let L(w) be an upper bound for the length of the list returned by a call to AllAnchors for width w. For the base case w = 0, L(0) = 1. The returned all_anchors list is obtained by pushing anchor lists one by one on line 36. We can count the number of times this line is executed by multiplying the lengths of the lists returned by the recursive calls on lines 28–32, giving us the recursion relation
L(w) ≥ ∑_{w1 + w2 + w3 = w − 1} L(w1) · L(w2) · L(w3).
Since L(w) is meant to be an upper bound, we replace the inequality with equality above to obtain a recurrence relation for L(w). This recurrence relation is a generalisation of the well-known Catalan numbers Stanley, 2015. Catalan Numbers. Cambridge University Press. doi: 10.1017/CBO9781139871495, equivalent to counting the number of ternary trees with w internal nodes: a ternary tree with w internal nodes is made of a root along with three subtrees with w1, w2 and w3 internal nodes respectively, with w1 + w2 + w3 = w − 1. A closed form solution to this problem can be found in Aval, 2008. Multivariate Fuss–Catalan numbers. Discrete Mathematics 308, 20 (October 2008, 4660–4669). doi: 10.1016/j.disc.2007.08.100:
L(w) = binom(3w, w) / (2w + 1),
satisfying the above recurrence relation with equality, where the asymptotic growth L(w) ∈ O(γ^w) with γ = 27/4 is obtained from the Stirling approximation.
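The counting argument can be checked numerically: a short sketch (illustrative, not part of the derivation) evaluates the ternary-tree recurrence and compares it against the closed-form Fuss–Catalan numbers binom(3w, w) / (2w + 1):

```python
from functools import lru_cache
from math import comb


@lru_cache(maxsize=None)
def ternary_trees(w):
    """Number of ternary trees with w internal nodes, via the recurrence
    T(w) = sum over w1 + w2 + w3 = w - 1 of T(w1) * T(w2) * T(w3)."""
    if w == 0:
        return 1
    return sum(
        ternary_trees(w1) * ternary_trees(w2) * ternary_trees(w - 1 - w1 - w2)
        for w1 in range(w)
        for w2 in range(w - w1)
    )


def fuss_catalan(w):
    """Closed form: the Fuss-Catalan number binom(3w, w) / (2w + 1)."""
    return comb(3 * w, w) // (2 * w + 1)


# The recurrence and the closed form agree: 1, 1, 3, 12, 55, 273, ...
assert all(ternary_trees(w) == fuss_catalan(w) for w in range(10))
```

The memoisation via lru_cache keeps the recurrence evaluation polynomial in w, which is only needed for this check; the analysis itself uses the closed form.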
To obtain a runtime bound for AllAnchors, it is useful to identify how much of G needs to be traversed. If we suppose all patterns have depth at most d, then it immediately follows that any operation in G that is in the image of a pattern embedding must be at most a distance d away from an anchor operation. We can thus equivalently call AllAnchors on a subgraph of G such that no linear path is longer than d. We thus obtain the following runtime.
AllAnchors runtime. For patterns with at most width w and depth d, the total runtime of AllAnchors is bounded as derived below.
Proof
We restrict Operations on line 9 to only return the first d operations on each linear path in each direction, starting at the anchor operation: operations more than distance d away from the anchor cannot be part of a pattern of depth d.
We use the bound on the length of the list returned by calls to AllAnchors of Proposition 4.10 to bound the runtime. We can ignore the non-constant runtime of the concatenation of the outputs of recursive calls on line 35, as the total size of the outputs is asymptotically at worst of the same complexity as the runtime of the recursive calls themselves. Excluding the recursive calls, the only remaining lines of AllAnchors that are not executed in constant time are the while loop on lines 15–18 and the Operations and sort calls on lines 9–11. Using the same argument as in CanonicalAnchors, we can ignore the latter two calls by replacing the queue of operations by a lazy iterator of operations. The next operation given op and the graph G can always be computed in amortised constant time using a depth-first traversal of G.
Consider the recursion tree of AllAnchors, i.e. the tree in which the nodes are the recursive calls to AllAnchors and the children are the executions spawned by the nested for loops on lines 28–32. This tree has at most as many leaves as the list-length bound of Proposition 4.10. A path from the root to a leaf corresponds to a stack of recursive calls to AllAnchors. Along this recursion path, the seen_paths set is strictly growing (line 20) and the operations removed from G on lines 14 and 18 are all distinct. For each linear path, at most O(d) operations are traversed. Thus the total runtime of the while loop (lines 15–18) along a path from root to leaf in the recursion tree is bounded, and the overall complexity of executing the entire recursion tree is the number of leaves multiplied by this per-path cost.
Every anchor operation is on at least one previously unseen linear path, thus there can be at most w operations in the set of anchors.
4.6. An automaton for multi-pattern matching
We have shown in the previous sections that graph pattern matching can be reduced to a problem of tree inclusion, with trees of fixed width w. To complete the pattern-matching algorithm, we must provide a fast way to evaluate the subtree relation for many trees representing the set of all patterns we wish to match.
More precisely, for patterns with width , fix a root operation in for each and consider the rooted tree duals of the canonical PSGs , with the canonical anchors. Then given a subject graph , we wish to compute the set
for all anchor sets and root operations in G. This corresponds to the IsSubTree predicate introduced in the sketch of the algorithm in section 4.4.
Instead of considering the trees of PSGs, it will prove easier to consider the contracted PSGs (cPSGs)
Such tree inclusions are equivalent to finding embeddings in the subject graph itself, provided that we keep track of the value and weight maps (see section 4.3).
It will be useful to recall the following properties of contracted PSGs. Every operation of a cPSG (and thus every node in its rooted dual tree) is an anchor operation of the PSG. Per Proposition 4.4, the rooted dual tree of a cPSG is a ternary tree and has exactly w nodes. Finally, recall the concept of an open value of a graph, i.e. a value that is missing either a use or define operation (see section 4.2).
Reduction of tree inclusion to string prefix matching
Now consider two contracted spanning tree reductions and with values and . To simplify notation, define
for some choice of root operations in the two graphs, respectively. We lift the relation on rooted dual trees of PSGs introduced in section 4.5 to rooted dual trees of cPSGs in such a way that there is an inclusion relation between two rooted dual trees of PSGs if and only if the same relation holds on the rooted duals of cPSGs.
We say that if and only if
- the trees share the same root operation,
- is a subtree of ,
- the map coincides on the common subtree, and
- the map satisfies for all :
where designates the embedding of into given by the tree embedding.
The first three conditions are taken as-is from the relation on non-contracted trees, whilst the fourth condition on the map is specific to contracted trees.
Using Proposition 4.2, there are at most 2 open values for each linear path in the graph, and thus at most 2w open values in a rooted dual tree of a cPSG of width w. For each such contracted rooted dual, we can thus define a contracted string tuple given by the values of the map evaluated in the (up to 2w) open values1.
If we restrict to the domain of definition of non-open values of a cPSG, the fourth condition for the inclusion relation on rooted dual cPSGs given above becomes an equality condition when restricted to non-open values. A special case of this property of particular interest to us is stated as the following result. The relation on strings refers to prefix inclusion, i.e. one string is included in another if and only if it is a prefix of it.
Let the contracted string tuples of the two rooted dual trees be given. Then the inclusion holds if and only if the trees share the same root, are isomorphic, have the same value and weight maps, and each string of the left tuple is a prefix of the corresponding string of the right tuple.
The proof of this follows directly from observing that rooted duals of cPSGs have the same set of nodes and that the restriction to non-open values must satisfy equality.
Why restrict ourselves to trees of the same width? It is sufficient for our purposes! All patterns are of width w by assumption, and so are the rooted dual trees computed from the subject graph.
The string prefix matching problem is a simple computational task that can be generalised to check for multiple string patterns at the same time using a prefix tree. An overview of this problem can be found in appendix A. We can thus obtain a solution for the pattern matching problem for patterns:
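The one-dimensional version of this primitive can be sketched with a standard prefix tree (trie); the multi-dimensional construction of appendix A generalises it to tuples of strings. The class below is a generic illustration, not the data structure of our implementation:

```python
class PrefixTree:
    """A trie storing pattern strings; querying with a subject string
    returns the labels of all stored patterns that are prefixes of it."""

    def __init__(self):
        self.children = {}
        self.labels = []  # labels of patterns ending at this node

    def insert(self, pattern, label):
        node = self
        for ch in pattern:
            node = node.children.setdefault(ch, PrefixTree())
        node.labels.append(label)

    def prefix_matches(self, subject):
        """All labels whose pattern is a prefix of `subject`, found in a
        single walk of length at most len(subject)."""
        node, found = self, list(self.labels)
        for ch in subject:
            node = node.children.get(ch)
            if node is None:
                break
            found.extend(node.labels)
        return found


t = PrefixTree()
t.insert("ab", "p1")
t.insert("abc", "p2")
t.insert("b", "p3")
matches = t.prefix_matches("abcd")  # "ab" and "abc" are prefixes of "abcd"
```

The key property carried over to pattern matching is that a single walk over the subject string reports every matching pattern at once, independently of how many patterns are stored.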
As above, let
- G be a graph, with a set of operations and a choice of root operation,
- P1, …, Pn be patterns of width w and depth d, with choices of root operations and canonical anchors.
The set of all pattern embeddings mapping the canonical anchor set and root operation accordingly can be computed using at most n pre-computed prefix trees of bounded size, each constructed ahead of time.
Proof
For each pattern, we consider its canonical spanning tree reduction and construct a multi-dimensional prefix tree (see appendix A) for each group of patterns that share the same spanning tree reduction.
Given a graph G, we can compute the cPSG for the chosen anchors and map its rooted dual tree to the corresponding prefix tree. This can be done efficiently by using a search tree. We can restrict G to a bounded-size graph by truncating the linear paths, as in the proof of Proposition 4.12.
The rest of the proof and the runtime follow from the multi-dimensional prefix tree construction detailed in appendix A.
Combining everything
Finally, putting Proposition . and Proposition 4.12 together, we obtain our main result.
Let P1, …, Pn be patterns with width w and depth d. The pre-computation runs in time and space complexity
For any subject graph G, the pre-computed prefix tree can be used to find all pattern embeddings in time
where γ is a constant.
Proof
The pre-computation consists of running the CanonicalAnchors procedure on each
of the patterns and then transforming them into a map of prefix trees
using Proposition .. By
Proposition 4.7, CanonicalAnchors runs in
for each pattern, where we used that
for all patterns. The total runtime of prefix construction is thus
The complexity of pattern matching itself, on the other hand, is composed of two parts: the computation of all possible anchor sets, and the execution of the prefix string matcher for each of the trees resulting from these sets. As AllAnchors must be run for every choice of root vertex in G, the runtime is thus obtained by multiplying i) the number of root choices with ii) the runtime of the prefix tree matching (Proposition .), and with iii) the number of anchor lists returned by AllAnchors, bounded by Proposition 4.10:
The result follows.
The values can be ordered as usual by using the total lexicographic order on port labels of the tree.
4.7. Benchmarks
Proposition 4.13 shows that pattern-independent matching can scale to large datasets of patterns but imposes some restrictions on the patterns and embeddings that can be matched. In this section, we discuss these limitations and give empirical evidence that the pattern-matching approach we have presented can be used on a large scale and outperform existing solutions.
Pattern limitations
In section 4.2, we imposed conditions on the pattern embeddings to obtain a complexity bound for pattern-independent matching. We argued how these restrictions are natural for applications in quantum computing, and most of the arguments will also hold for a much broader class of computation graphs.
In future work, it would nonetheless be of theoretical interest to explore the importance of these assumptions and their impact on the complexity of the problem. As a first step towards a generalisation, our implementation and all our benchmarks in this section do not make any of these simplifying assumptions. Our results below give empirical evidence that a significant performance advantage can be obtained regardless.
Implementation
We provide an open-source implementation in Rust of pattern independent matching using the results of this chapter, available on GitHub. The code and datasets used for the benchmarks themselves are available in a dedicated repository.
The implementation works for weighted or unweighted port graphs – of which typed minIR graphs are a special case – and makes none of the simplifying assumptions employed in the theoretical analysis. Pattern matching proceeds in two phases: precomputation and runtime.
Precomputation. In a first step, all graph patterns are processed and compiled into a single state automaton that will be used at runtime for fast pattern independent matching. The automaton in the implementation combines two distinct computations of this chapter in one data structure:
- the recursive branching logic used in the AllAnchors procedure to enumerate all possible choices of anchors, and
- the automaton described in section 4.6 that matches patterns for a fixed set of anchors.
The former is implemented with non-deterministic state transitions – each transition corresponding to choosing an additional anchor – whereas the latter is implemented deterministically.
Concretely, the automaton is constructed by following the construction of section 4.4 to decompose each pattern into its canonical path-split graph. We then order the nodes of the PSG and express each node as a condition that ensures that the connectivity and node weights in the graph match the pattern. We thus obtain a chain of conditions, with a transition between any two consecutive conditions; transitions are deterministic by default and marked as non-deterministic whenever they lead to a condition on an anchor node. The state automaton for all patterns is then obtained by joining all chains of conditions into a tree.
Runtime. Pattern matching is then as simple as simulating the state automaton, evaluating all conditions on the graph passed as input. The states in the automaton corresponding to the last condition of a pattern must be marked as end states, along with a label identifying the pattern that was matched. This can then be used at runtime to report all patterns found.
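The runtime simulation can be sketched generically as follows. This is a simplified illustration under an assumed state encoding (condition predicate, outgoing transitions, end labels), not the actual Rust implementation:

```python
def run_automaton(states, graph, start=0):
    """Simulate a condition automaton over `graph`, reporting the labels
    of all end states reached.

    `states` maps a state id to (condition, transitions, end_labels),
    where `condition` is a predicate on the graph, `transitions` is a
    list of successor state ids, and `end_labels` are the patterns
    matched when this state is reached. This encoding is hypothetical.
    Non-deterministic branching is explored exhaustively with a stack.
    """
    matches = []
    stack = [start]
    while stack:
        state = stack.pop()
        condition, transitions, end_labels = states[state]
        if not condition(graph):
            continue  # condition failed: abandon this branch
        matches.extend(end_labels)  # report any pattern ending here
        stack.extend(transitions)  # explore all outgoing branches
    return matches


# Toy automaton: state 0 always passes and branches into states 1 and 2
states = {
    0: (lambda g: True, [1, 2], []),
    1: (lambda g: g["n"] > 2, [], ["patternA"]),
    2: (lambda g: g["n"] < 10, [], ["patternB"]),
}
result = run_automaton(states, {"n": 5})  # matches both patterns
```

Shared condition prefixes between patterns are evaluated once per branch of the traversal, which is where the single-automaton design saves work over matching each pattern separately.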
Our implementation has been tested for correctness, i.e. on the one hand that all matches that are reported are correct, and on the other hand that all pattern matches are found. This was done by comparing the matches of our implementation with the results obtained from matching every pattern separately on millions of randomly generated graphs and edge cases. We also ensured during benchmarking that the number of matches reported by our implementation and by Quartz were always the same.
Benchmarks
Baseline. To assess practical use, we have benchmarked our implementation against a leading C++ implementation of pattern matching for quantum circuits from the Quartz superoptimiser project Xu, 2022. 2022. Quartz: Superoptimization of Quantum Circuits. In Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation, June 2022. Association for Computing Machinery, 625--640. doi: 10.1145/3519939.3523433. This implementation is the principal component of an end-to-end quantum circuit optimisation pipeline. The results and speedups we obtain here thus apply and transfer directly to this application.
Dataset. We further ensure that our results apply in practice by using a real-world dataset of patterns. The Quartz optimiser finds opportunities for circuit optimisation by relying on precomputed equivalence classes of circuits (ECC). These are obtained exhaustively by enumerating all possible small quantum circuits, computing their unitaries and clustering them into classes of circuits with identical unitaries.
The generation of ECC sets is parametrised on the number of qubits, the maximum number of gates and the gate set in use. For these benchmarks we chose the minimal set of gates and considered circuits with up to 6 gates and 2, 3 or 4 qubits. The size of these pattern circuits is typical for the application1.
Thus, for our patterns, the maximum depth is bounded by the gate count (at most 6) and the maximum width by the qubit count (at most 4). In all experiments, the graph subject to pattern matching was the barenco_tof_10 input, a 19-qubit circuit with 674 gates obtained by decomposing a 10-qubit Toffoli gate using the Barenco decomposition Barenco, 1995. 1995. Elementary gates for quantum computation. Physical Review A 52, 5 (November 1995, 3457--3467). doi: 10.1103/PhysRevA.52.3457.
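The ECC generation described above (enumerate small circuits, compute their unitaries, cluster circuits with identical unitaries) can be sketched as follows. This is a toy illustration with an assumed two-qubit gate set and a simple rounding-based fingerprint; the actual Quartz pipeline uses its own gate sets and fingerprinting techniques:

```python
import itertools
from collections import defaultdict
import numpy as np

# Toy two-qubit gate set (an assumption standing in for the minimal gate set).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
GATES = {"h0": np.kron(H, I2), "h1": np.kron(I2, H), "cx": CX}

def unitary(circuit):
    """Multiply the gate unitaries of a circuit (a tuple of gate names)."""
    u = np.eye(4)
    for g in circuit:
        u = GATES[g] @ u
    return u

def fingerprint(u, digits=8):
    """Hashable key for a unitary, equal for unitaries that agree up to
    global phase and floating-point noise."""
    idx = np.argmax(np.abs(u) > 1e-9)          # first non-negligible entry
    phase = u.flat[idx] / abs(u.flat[idx])     # divide out its phase
    return tuple(np.round((u / phase).ravel(), digits))

def ecc_sets(max_gates):
    """Cluster all circuits with up to `max_gates` gates into equivalence
    classes of circuits implementing the same unitary."""
    classes = defaultdict(list)
    for n in range(max_gates + 1):
        for circ in itertools.product(GATES, repeat=n):
            classes[fingerprint(unitary(circ))].append(circ)
    return list(classes.values())
```

For instance, `ecc_sets(2)` places the empty circuit, two Hadamards on the same qubit, and a doubled CNOT into the same class, since all implement the identity.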
Results. We study the runtime of our implementation as a function of the number of patterns being matched, up to the total number of patterns in the ECC sets. We expect the runtime of pattern matching algorithms that match one pattern at a time to scale linearly with the number of patterns. On the other hand, Proposition 4.13 gives a complexity that is independent of the number of patterns.
For each pattern count, we select a subset of all patterns in the ECC sets at random; the ECC sets contain only a limited total number of patterns, which is why we do not report results beyond that number. Our proposed algorithm is faster than Quartz and, as expected, the advantage of our approach increases as we match more patterns. The results are summarised in the following figure:
Runtime of pattern matching on 2-, 3- and 4-qubit quantum circuit patterns from the Quartz ECC dataset, for our implementation (Portmatching) and the Quartz project. All two-qubit circuits were used, whereas for 3- and 4-qubit circuits, random samples were drawn.
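The benchmark procedure above amounts to timing the matcher on random pattern subsets of increasing size. A minimal sketch, assuming a hypothetical `matcher_factory` interface rather than the actual portmatching or Quartz APIs:

```python
import random
import time

def benchmark(matcher_factory, all_patterns, graph, sizes, seed=0):
    """Time multi-pattern matching on random pattern subsets.

    `matcher_factory(patterns)` is assumed to build a matcher object with a
    `find_matches(graph)` method (a stand-in for the real interface).
    """
    rng = random.Random(seed)
    timings = {}
    for k in sizes:
        patterns = rng.sample(all_patterns, k)
        # Matcher construction is done ahead of time and excluded from
        # the timed section; only the matching itself is measured.
        matcher = matcher_factory(patterns)
        start = time.perf_counter()
        matcher.find_matches(graph)
        timings[k] = time.perf_counter() - start
    return timings
```

Plotting `timings` against the subset size then distinguishes linear scaling (one pattern at a time) from the pattern-count-independent behaviour of Proposition 4.13.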
Dependency on the number of patterns and the pattern width. We further study the runtime of our algorithm as a function of its two main parameters, the number of patterns and the pattern width, on an expanded dataset. To this end, we generate random sets of 10,000 pattern circuits with 15 gates each and up to 10 qubits, using the same gate set as previously. The resulting pattern matching runtimes are shown in the figure below.
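The expanded dataset of random pattern circuits can be generated along these lines. The concrete gate names and the uniform sampling of gates and qubits are assumptions for illustration only:

```python
import random

def random_circuit(n_qubits, n_gates, rng):
    """Draw a random circuit as a list of (gate, qubits) pairs from a small
    one- and two-qubit gate set (gate names are illustrative assumptions)."""
    one_qubit = ["h", "t", "tdg"]
    circuit = []
    for _ in range(n_gates):
        if n_qubits >= 2 and rng.random() < 0.5:
            q0, q1 = rng.sample(range(n_qubits), 2)  # distinct control/target
            circuit.append(("cx", (q0, q1)))
        else:
            circuit.append((rng.choice(one_qubit), (rng.randrange(n_qubits),)))
    return circuit

def random_pattern_set(n_patterns, n_gates, max_qubits, seed=0):
    """Sample a pattern set whose circuits use up to `max_qubits` qubits."""
    rng = random.Random(seed)
    return [random_circuit(rng.randint(2, max_qubits), n_gates, rng)
            for _ in range(n_patterns)]
```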
From Proposition 4.13, we expect the pattern matching runtime to be upper bounded by a constant independent of the number of patterns. Our results support this prediction for patterns with small qubit counts, where the runtime indeed seems to saturate, reaching an observable plateau at large pattern counts.
We suspect, on the other hand, that the exponential dependency on the pattern width in the complexity bound of Proposition 4.13 makes it difficult to observe a similar plateau for larger qubit counts, as we expect this upper bound on the runtime to increase rapidly with the qubit count. A runtime ceiling is not directly observable at this experiment size, but the gradual decrease in the slope of the curve is consistent with the existence of the pattern-count-independent upper bound predicted by Proposition 4.13.
Runtime of our pattern matching for random quantum circuits with up to 10 qubits.
Such small circuit sizes are imposed in part by the fact that ECCs of larger circuits quickly become infeasible to generate, as their number grows exponentially. In practice, large circuit transformations can often be expressed as a composition of smaller atomic transformations, which explains the good performance of this approach. ↩︎