Conversation
Use getattr with a default of None instead of direct attribute access, which raises AttributeError on NodeStorage objects without an x attribute.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
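The fix described above can be sketched as follows (this `NodeStorage` is a minimal stand-in for illustration, not the real torch_geometric storage class):

```python
class NodeStorage:
    """Minimal stand-in for a storage object that may lack ``x``."""
    def __init__(self, num_nodes):
        self.num_nodes = num_nodes  # note: no ``x`` attribute is ever set

store = NodeStorage(num_nodes=4)

# store.x                         # direct access would raise AttributeError
x = getattr(store, "x", None)     # safe: returns None instead of raising
if x is None:
    print("no node features present")
```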
```python
if num_nodes == 0:
    for node_type in data.node_types:
        data[node_type][self.attr_name] = torch.zeros(
```
It's currently attached as a separate attribute to each node type. We could also have it attach to `x` by feeding different parameters into `add_node_attr(data, pe, attr_name=None)`. I can add that function if needed.
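The `add_node_attr` helper floated above might look like the sketch below. Everything here is an assumption about a function that does not exist yet in this PR, written against a plain namespace rather than the real `HeteroData` API, and using list concatenation where the real code would use `torch.cat(..., dim=-1)`:

```python
from types import SimpleNamespace

def add_node_attr(data, value, attr_name=None):
    """Hypothetical helper: attach ``value`` as a separate node attribute
    when ``attr_name`` is given, otherwise append it onto the existing
    ``x`` features (the "attach to x" option discussed above)."""
    if attr_name is None:
        x = getattr(data, "x", None)
        # per-node feature concatenation; with tensors this would be
        # torch.cat([x, value], dim=-1)
        data.x = value if x is None else [xi + vi for xi, vi in zip(x, value)]
    else:
        setattr(data, attr_name, value)
    return data

data = SimpleNamespace(x=[[1.0, 2.0], [3.0, 4.0]])
add_node_attr(data, [[0.5], [0.5]], attr_name=None)   # appended onto x
add_node_attr(data, [[7], [8]], attr_name="hop_pe")   # separate attribute
```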
```python
class AddHeteroHopDistanceEncoding(BaseTransform):
    r"""Adds hop distance positional encoding as relative encoding (sparse).

    For each pair of nodes (vi, vj), computes the shortest path distance p(vi, vj).
```
Shall we also allow users to pass a list of anchor nodes, and return the other nodes' distances to this list of anchor nodes?
For distance to anchor, I think that can be handled while generating the sequence: a node's distance to the anchor differs from sequence to sequence, so we can't attach it to node features at this level; it can only be derived during sequence generation.
This is currently a relative encoding that is meant to be used for attention biasing. I think we can add a hop-distance-to-anchor-only PE if the full hop distance runs into memory issues (testing now).
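For reference, the anchor-only PE discussed above could be sketched as a multi-source BFS: each node gets the fewest hops to reach *any* anchor. This is a hypothetical plain-Python sketch (function name and edge-list format are assumptions), not the sparse-tensor implementation in this PR:

```python
from collections import deque

def hop_distance_to_anchors(edge_index, num_nodes, anchors):
    """For each node, the minimum hop distance to any node in ``anchors``;
    unreachable nodes get -1. Treats the graph as undirected."""
    adj = [[] for _ in range(num_nodes)]
    for src, dst in edge_index:
        adj[src].append(dst)
        adj[dst].append(src)
    dist = [-1] * num_nodes
    queue = deque()
    for a in anchors:          # multi-source BFS seeded with the anchor set
        dist[a] = 0
        queue.append(a)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if dist[w] < 0:    # first visit is the shortest distance
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# path graph 0-1-2-3 with anchors {0, 3}
print(hop_distance_to_anchors([(0, 1), (1, 2), (2, 3)], 4, [0, 3]))  # [0, 1, 1, 0]
```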
```python
frontier = adj.coalesce()

# Track all visited pairs using sparse tensor (value > 0 means visited)
visited = torch.sparse_coo_tensor(
```
nit: add param name here for readability
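Concretely, the nit above means spelling out the constructor's keyword arguments, which `torch.sparse_coo_tensor` accepts as `indices`, `values`, and `size` (the example values here are illustrative, not from the PR):

```python
import torch

# Keyword arguments make the sparse constructor self-documenting:
indices = torch.tensor([[0, 1],
                        [1, 2]])        # 2 x nnz COO coordinates
values = torch.ones(2)
visited = torch.sparse_coo_tensor(
    indices=indices,
    values=values,
    size=(3, 3),
)
```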
Scope of work done
Where is the documentation for this feature? N/A
Did you add automated tests or write a test plan?
Updated Changelog.md? NO
Ready for code review? NO