
EncodeProcessDecode model documentation

See original GitHub issue

Hi, I’ve been trying to understand the EncodeProcessDecode model, but the documentation is a little sparse on its parameters (edge_output_size, node_output_size, global_output_size).

In the code I found the comment # Transforms the outputs into the appropriate shapes. but that doesn’t explain the exact reshaping. Can anybody clarify?

Thanks a lot!
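For concreteness, here is a rough numpy sketch of what I assume those parameters do, based only on that comment (this is an assumption, not the library’s actual code): each *_output_size would simply set the width of a final linear transform applied independently to every element of that field (edges, nodes, or globals).

```python
import numpy as np

# Hypothetical sketch (not the actual graph_nets code): the assumption is
# that node_output_size just sets the width of a final linear layer applied
# independently to every node's latent features. All sizes are illustrative.
rng = np.random.default_rng(0)

latent_size = 16       # latent width after the process step
node_output_size = 2   # e.g. predict a 2-D quantity per node
num_nodes = 5

node_latents = rng.normal(size=(num_nodes, latent_size))

# The "transform into the appropriate shape" is then just an affine map
# from (num_nodes, latent_size) to (num_nodes, node_output_size):
W = rng.normal(size=(latent_size, node_output_size))
b = np.zeros(node_output_size)
node_outputs = node_latents @ W + b

print(node_outputs.shape)  # -> (5, 2)
```

Under this reading, edge_output_size and global_output_size would configure the analogous transforms for the edge and global fields.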

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 7

Top GitHub Comments

3 reactions
alvarosg commented, Nov 10, 2018

The short answer is yes: the graph network treats all nodes equally (apart from the actual values of their features), so the MLPs (or other models) that process nodes and edges have to assume a fixed feature size.

However, you should think carefully about how you do the padding. For example: let’s say you have nodes of two types with a set of shared features (x_i), but also some features that are specific to each node type (y_i and z_i). The first type has type_1_features: (x_1, x_2, y_1, y_2) and the other type has type_2_features: (x_1, x_2, z_1).

In this case you would probably do the padding as type_1_features: (x_1, x_2, y_1, y_2, 0) and type_2_features: (x_1, x_2, 0, 0, z_1), so the shared features are aligned and the non-shared features have their own slots.
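In code, that alignment might look like this (a toy sketch with made-up values, following the x/y/z slot layout above):

```python
# Toy example of the alignment described above (values are made up).
# Type 1 has features (x_1, x_2, y_1, y_2); type 2 has (x_1, x_2, z_1).
type_1_features = [0.1, 0.2, 0.3, 0.4]   # (x_1, x_2, y_1, y_2)
type_2_features = [0.5, 0.6, 0.7]        # (x_1, x_2, z_1)

# Shared slots first, then one slot per type-specific feature:
type_1_padded = type_1_features + [0.0]                                 # (x_1, x_2, y_1, y_2, 0)
type_2_padded = type_2_features[:2] + [0.0, 0.0] + type_2_features[2:]  # (x_1, x_2, 0, 0, z_1)

assert len(type_1_padded) == len(type_2_padded) == 5
```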

Additionally, you may want to include a one-hot encoding of the node type to help the network reason about the different node types, so the final features would look like:

type_1_features_padded: (1, 0, x_1, x_2, y_1, y_2, 0) and type_2_features_padded: (0, 1, x_1, x_2, 0, 0, z_1)

In some cases, if you have many node types and each of them has its own different features, you may not be able to afford a separate slot for every type-specific feature. In those cases it should be good enough to just pad with zeros at the end and let the network use the one-hot encodings to do the reasoning.
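Putting both suggestions together, a small helper could build the padded vector with a one-hot type prefix and zeros at the end (a sketch; the function name and shapes are illustrative, not part of graph_nets):

```python
import numpy as np

def pad_with_type(features, node_type, num_types, max_len):
    """Prefix a one-hot node-type encoding and zero-pad features at the end.

    Illustrative helper, not part of graph_nets.
    """
    one_hot = np.eye(num_types)[node_type]
    padded = np.zeros(max_len)
    padded[: len(features)] = features
    return np.concatenate([one_hot, padded])

# Two node types, at most 4 raw features -> every node ends up with 2 + 4 = 6 slots.
v1 = pad_with_type([1.0, 2.0, 3.0, 4.0], node_type=0, num_types=2, max_len=4)
v2 = pad_with_type([5.0, 6.0, 7.0], node_type=1, num_types=2, max_len=4)

print(v1)  # [1. 0. 1. 2. 3. 4.]
print(v2)  # [0. 1. 5. 6. 7. 0.]
```

All nodes then share one fixed feature width, and the leading one-hot slots tell the network which zeros are padding.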

0 reactions
ferreirafabio commented, Nov 10, 2018

That helped. Appreciate the response!

Read more comments on GitHub >

Top Results From Across the Web

  • Encoder Decoder Models — transformers 3.0.2 documentation: This class can wrap an encoder model, such as BertModel, and a decoder model with a language modeling head, such as BertForMaskedLM, into...
  • Graph-based Task-specific Prediction Models for Interactions between Deformable and Rigid Objects. Abstract: Capturing scene dynamics and predicting the ...
  • CERN Summer Student Project Report — Albert Sund Aillet: In all of these subprojects, a so-called EncodeProcessDecode model from the Graph Nets Library was used with different numbers of processing ...
  • Deep learning of graph transformations — BME: 4.1 Encode Process Decode model ... In chapter 3 I documented some of the ... with and I documented their results in...
  • Masters Thesis: Design of a Graph Neural Network: Graph Neural Networks are a unique type of Deep Learning models that have a capability to ... 4-2-3 Message Passing in Encode-Process-Decode Architecture...
