spacetimeformer.spacetimeformer_model.nn package
- class spacetimeformer.spacetimeformer_model.nn.attn.AttentionLayer(attention, d_model, n_heads, dropout_qkv=0.0, d_keys=None, d_values=None, mix=False)[source]
Bases: torch.nn.modules.module.Module
- forward(queries, keys, values, attn_mask, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
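A minimal usage sketch, assuming the Informer-style (batch, length, d_model) tensor layout and that the attention argument takes an instantiated inner-attention module such as FullAttention; none of these shapes are stated in the signature above:
    import torch
    from spacetimeformer.spacetimeformer_model.nn.attn import AttentionLayer, FullAttention

    d_model, n_heads = 64, 4
    layer = AttentionLayer(
        attention=FullAttention(mask_flag=False, attention_dropout=0.1),
        d_model=d_model,
        n_heads=n_heads,
    )

    # Self-attention: queries, keys, and values are the same tensor.
    x = torch.randn(8, 32, d_model)
    out = layer(x, x, x, attn_mask=None)  # may be an (output, attn) pair depending on output_attn handling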
- class spacetimeformer.spacetimeformer_model.nn.attn.BenchmarkAttention(*args, **kwargs)[source]
Bases: torch.nn.modules.module.Module
- forward(queries, keys, values, attn_mask, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class spacetimeformer.spacetimeformer_model.nn.attn.FullAttention(mask_flag=False, scale=None, attention_dropout=0.1)[source]
Bases: torch.nn.modules.module.Module
- forward(queries, keys, values, attn_mask, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
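FullAttention is standard scaled dot-product attention. A functional sketch of the computation it performs; treating scale=None as the conventional 1/sqrt(head_dim) factor is an assumption consistent with common practice, not a statement of this codebase's internals:
    import torch
    import torch.nn.functional as F

    def scaled_dot_product(q, k, v, scale=None):
        # q, k, v: (batch, heads, length, head_dim)
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) * (scale if scale is not None else d ** -0.5)
        return F.softmax(scores, dim=-1) @ v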
- class spacetimeformer.spacetimeformer_model.nn.attn.LocalAttentionLayer(attention, d_y, d_model, n_heads, dropout_qkv=0.0)[source]
Bases: torch.nn.modules.module.Module
- forward(queries, keys, values, attn_mask=None, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
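A construction sketch. The d_y argument suggests attention is applied per variable rather than across the full flattened sequence; the exact type expected for the attention argument is an assumption here (an AttentionLayer wrapping an inner attention):
    from spacetimeformer.spacetimeformer_model.nn.attn import (
        AttentionLayer, FullAttention, LocalAttentionLayer)

    # Hypothetical wiring: restrict attention to each of d_y variables.
    local = LocalAttentionLayer(
        attention=AttentionLayer(FullAttention(), d_model=64, n_heads=4),
        d_y=3,
        d_model=64,
        n_heads=4,
    )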
- class spacetimeformer.spacetimeformer_model.nn.attn.NystromSelfAttention(d_model, n_heads, num_landmarks=256, pinv_iterations=6, attention_dropout=0.0, residual=False, residual_conv_kernel=33, eps=1e-08)[source]
Bases: torch.nn.modules.module.Module
- forward(x, x_, x__, attn_mask=None, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
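Nystrom attention approximates the full L x L attention matrix from num_landmarks landmark points, using pinv_iterations steps of an iterative Moore-Penrose pseudo-inverse; residual adds a depthwise-convolution skip over the values. A minimal sketch, where the (batch, length, d_model) layout is an assumption and the three positional inputs are the same tensor for self-attention, per the signature above:
    import torch
    from spacetimeformer.spacetimeformer_model.nn.attn import NystromSelfAttention

    attn = NystromSelfAttention(d_model=64, n_heads=4, num_landmarks=32)
    x = torch.randn(8, 256, 64)
    out = attn(x, x, x)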
- class spacetimeformer.spacetimeformer_model.nn.attn.PerformerAttention(mask_flag=False, dim_heads=None, ortho_scaling=0, feature_redraw_interval=1000, kernel='softmax')[source]
Bases: performer_pytorch.performer_pytorch.FastAttention
- forward(queries, keys, values, attn_mask, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
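PerformerAttention inherits performer_pytorch's FastAttention, which replaces softmax attention with the FAVOR+ random-feature approximation for linear rather than quadratic cost in sequence length; the random projection is redrawn every feature_redraw_interval forward calls. A construction sketch, where dim_heads is assumed to be d_model / n_heads:
    from spacetimeformer.spacetimeformer_model.nn.attn import PerformerAttention

    fast = PerformerAttention(
        mask_flag=False,
        dim_heads=16,                  # e.g. d_model=64 with n_heads=4
        feature_redraw_interval=1000,  # refresh the random features periodically
        kernel='softmax',              # 'relu' is the full model's default above
    )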
- class spacetimeformer.spacetimeformer_model.nn.attn.ProbAttention(mask_flag=True, factor=5, scale=None, attention_dropout=0.1)[source]
Bases: torch.nn.modules.module.Module
- forward(queries, keys, values, attn_mask, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
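ProbAttention is Informer-style ProbSparse attention: only the top-u queries (u roughly factor * ln L in the Informer paper's recipe) attend densely, which cuts cost on long sequences. A usage sketch inside an AttentionLayer, under the same shape assumptions as above:
    import torch
    from spacetimeformer.spacetimeformer_model.nn.attn import AttentionLayer, ProbAttention

    prob = AttentionLayer(
        attention=ProbAttention(mask_flag=True, factor=5, attention_dropout=0.1),
        d_model=64,
        n_heads=4,
    )
    x = torch.randn(8, 128, 64)
    out = prob(x, x, x, attn_mask=None)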
- class spacetimeformer.spacetimeformer_model.nn.data_dropout.DataDropout(dropout=None)[source]
Bases: torch.nn.modules.module.Module
- forward(embed)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
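A usage sketch, assuming dropout is the probability of zeroing whole embedded tokens during training (a regularizer on the input sequence, distinct from element-wise dropout):
    import torch
    from spacetimeformer.spacetimeformer_model.nn.data_dropout import DataDropout

    drop = DataDropout(dropout=0.1)
    embed = torch.randn(8, 32, 64)   # assumed (batch, tokens, d_model)
    out = drop(embed)                # presumably a no-op in eval() mode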
- class spacetimeformer.spacetimeformer_model.nn.decoder.Decoder(layers, norm_layer=None, emb_dropout=0.0, data_dropout=0.0)[source]
Bases: torch.nn.modules.module.Module
- forward(val_time_emb, space_emb, cross, x_mask=None, cross_mask=None)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class spacetimeformer.spacetimeformer_model.nn.decoder.DecoderLayer(global_self_attention, local_self_attention, global_cross_attention, local_cross_attention, d_model, d_ff=None, dropout_ff=0.1, activation='relu', post_norm=True, norm='layer')[source]
Bases: torch.nn.modules.module.Module
- forward(x, cross, x_mask=None, cross_mask=None)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
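A wiring sketch for the two decoder classes above. Mirroring the full model's defaults local_self_attn='none' and local_cross_attn='none', the local slots are assumed to accept None when unused; self-attention is causally masked, cross-attention over the encoder output is not:
    from spacetimeformer.spacetimeformer_model.nn.attn import AttentionLayer, FullAttention
    from spacetimeformer.spacetimeformer_model.nn.decoder import Decoder, DecoderLayer

    d_model, n_heads = 64, 4

    def attn(mask_flag):
        return AttentionLayer(FullAttention(mask_flag=mask_flag), d_model, n_heads)

    layer = DecoderLayer(
        global_self_attention=attn(mask_flag=True),    # masked self-attention
        local_self_attention=None,
        global_cross_attention=attn(mask_flag=False),  # attends over encoder output
        local_cross_attention=None,
        d_model=d_model,
        d_ff=128,
    )
    decoder = Decoder(layers=[layer], norm_layer=None)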
- class spacetimeformer.spacetimeformer_model.nn.embed.SpacetimeformerEmbedding(d_y, d_x, d_model=256, time_emb_dim=6, method='spatio-temporal', downsample_convs=1, start_token_len=0, null_value=None)[source]
Bases: torch.nn.modules.module.Module
- GIVEN = True
- SPACE = True
- TIME = True
- VAL = True
- training: bool
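The GIVEN/SPACE/TIME/VAL flags presumably toggle which embedding components (given-mask, variable index, time features, values) contribute to each token. With method='spatio-temporal', each (timestep, variable) pair becomes its own token, so a window of L steps over d_y series is flattened to L * d_y tokens. A construction sketch:
    from spacetimeformer.spacetimeformer_model.nn.embed import SpacetimeformerEmbedding

    emb = SpacetimeformerEmbedding(
        d_y=3,                     # number of target series
        d_x=4,                     # number of time features
        d_model=64,
        method='spatio-temporal',  # vs. 'temporal' (one token per timestep)
    )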
- class spacetimeformer.spacetimeformer_model.nn.encoder.DownsampleConv(c_in)[source]
Bases: torch.nn.modules.module.Module
- forward(x)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class spacetimeformer.spacetimeformer_model.nn.encoder.Encoder(attn_layers, conv_layers, norm_layer, emb_dropout=0.0, data_dropout=0.0)[source]
Bases: torch.nn.modules.module.Module
- forward(val_time_emb, space_emb, attn_mask=None, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class spacetimeformer.spacetimeformer_model.nn.encoder.EncoderLayer(global_attention, local_attention, d_model, d_ff=None, dropout_ff=0.1, activation='relu', post_norm=True, norm='layer')[source]
Bases: torch.nn.modules.module.Module
- forward(x, attn_mask=None, output_attn=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
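A matching sketch for a one-layer encoder. Note that norm_layer has no default in Encoder's signature, so it must be passed explicitly; that None is acceptable there, and in the local_attention slot, is an assumption:
    from spacetimeformer.spacetimeformer_model.nn.attn import AttentionLayer, FullAttention
    from spacetimeformer.spacetimeformer_model.nn.encoder import Encoder, EncoderLayer

    d_model, n_heads = 64, 4
    layer = EncoderLayer(
        global_attention=AttentionLayer(FullAttention(), d_model, n_heads),
        local_attention=None,   # mirrors local_self_attn='none' in the full model
        d_model=d_model,
        d_ff=128,
    )
    encoder = Encoder(attn_layers=[layer], conv_layers=[], norm_layer=None)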
- class spacetimeformer.spacetimeformer_model.nn.encoder.Normalization(method, d_model=None)[source]
Bases: torch.nn.modules.module.Module
- forward(x)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
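Normalization appears to be a thin dispatcher over the norm variants in this package; the accepted method strings are assumed to match the full model's norm argument (e.g. 'layer', 'scale', 'power'):
    import torch
    from spacetimeformer.spacetimeformer_model.nn.encoder import Normalization

    norm = Normalization(method='layer', d_model=64)
    x = torch.randn(8, 32, 64)
    y = norm(x)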
- class spacetimeformer.spacetimeformer_model.nn.encoder.VariableDownsample(d_y, d_model)[source]
Bases: torch.nn.modules.module.Module
- forward(x)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class spacetimeformer.spacetimeformer_model.nn.model.Spacetimeformer(d_y: int = 1, d_x: int = 4, start_token_len: int = 64, attn_factor: int = 5, d_model: int = 512, n_heads: int = 8, e_layers: int = 2, d_layers: int = 2, d_ff: int = 512, time_emb_dim: int = 6, dropout_emb: float = 0.05, dropout_token: float = 0.05, dropout_attn_out: float = 0.05, dropout_ff: float = 0.05, dropout_qkv: float = 0.05, global_self_attn: str = 'performer', local_self_attn: str = 'none', global_cross_attn: str = 'performer', local_cross_attn: str = 'none', performer_attn_kernel: str = 'relu', performer_redraw_interval: int = 250, embed_method: str = 'spatio-temporal', activation: str = 'gelu', post_norm: bool = True, norm: str = 'layer', initial_downsample_convs: int = 0, intermediate_downsample_convs: int = 0, device=device(type='cuda', index=0), null_value: Optional[float] = None, verbose: bool = True)[source]
Bases: torch.nn.modules.module.Module
- forward(x_enc, x_mark_enc, x_dec, x_mark_dec, enc_self_mask=None, dec_self_mask=None, dec_enc_mask=None, output_attention=False)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
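The forward signature follows the Informer-style interface: x_enc/x_dec are assumed to carry the d_y target variables, x_mark_enc/x_mark_dec the d_x time features, and the first start_token_len decoder steps to be known context prepended to the (typically zeroed) prediction slots. A small CPU sketch under those assumptions:
    import torch
    from spacetimeformer.spacetimeformer_model.nn.model import Spacetimeformer

    model = Spacetimeformer(
        d_y=3, d_x=4, d_model=64, n_heads=4,
        e_layers=1, d_layers=1, d_ff=64,
        start_token_len=4,
        device=torch.device('cpu'),  # default is cuda:0
    )

    batch, ctx_len, tgt_len, start = 8, 32, 8, 4
    x_enc = torch.randn(batch, ctx_len, 3)
    x_mark_enc = torch.randn(batch, ctx_len, 4)
    x_dec = torch.cat([torch.randn(batch, start, 3),
                       torch.zeros(batch, tgt_len, 3)], dim=1)
    x_mark_dec = torch.randn(batch, start + tgt_len, 4)
    out = model(x_enc, x_mark_enc, x_dec, x_mark_dec)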
PowerNorm code from https://github.com/sIncerass/powernorm/blob/master/fairseq/modules/norms/mask_powernorm.py
- class spacetimeformer.spacetimeformer_model.nn.powernorm.MaskPowerNorm(num_features, eps=1e-05, alpha_fwd=0.9, alpha_bkw=0.9, affine=True, warmup_iters=10000, group_num=1)[source]
Bases: torch.nn.modules.module.Module
An implementation of masked batch normalization, used for testing numerical stability.
- extra_repr()[source]
Set the extra representation of the module. To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input, pad_mask=None, is_encoder=False)[source]
input: T x B x C (transposed internally to B x C x T for the statistics, then back to T x B x C on output).
pad_mask: B x T, where True marks padded positions.
- training: bool
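Note the sequence-first layout: unlike the rest of the package, this module documents T x B x C input. A usage sketch:
    import torch
    from spacetimeformer.spacetimeformer_model.nn.powernorm import MaskPowerNorm

    pn = MaskPowerNorm(num_features=64)
    x = torch.randn(32, 8, 64)                   # T x B x C
    pad = torch.zeros(8, 32, dtype=torch.bool)   # B x T, True marks padding
    y = pn(x, pad_mask=pad)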
- class spacetimeformer.spacetimeformer_model.nn.scalenorm.ScaleNorm(dim, eps=1e-05)[source]
Bases: torch.nn.modules.module.Module
- forward(x)[source]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
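ScaleNorm (from "Transformers without Tears") replaces LayerNorm's per-feature affine parameters with a single learned scale divided by the vector's L2 norm. A functional sketch of the standard formulation, which this class is assumed to follow:
    import torch

    def scale_norm(x, g, eps=1e-5):
        # y = g * x / max(||x||_2, eps), with one learned scalar g.
        return g * x / x.norm(dim=-1, keepdim=True).clamp(min=eps)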