spacetimeformer package

class spacetimeformer.callbacks.TeacherForcingAnnealCallback(start, end, epochs)[source]

Bases: pytorch_lightning.callbacks.base.Callback

classmethod add_cli(parser)[source]
on_validation_epoch_end(trainer, model)[source]

Called when the validation epoch ends.

class spacetimeformer.callbacks.TimeMaskedLossCallback(start, end, steps)[source]

Bases: pytorch_lightning.callbacks.base.Callback

classmethod add_cli(parser)[source]
on_train_batch_end(trainer, model, *args)[source]

Called when the training batch ends.

on_train_start(trainer, model)[source]

Called when training begins.

property time_mask
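
Example (illustrative sketch covering both callbacks above; the schedule values are arbitrary and the meaning of start/end/steps is inferred from the parameter names, not taken from the package):

import pytorch_lightning as pl
from spacetimeformer.callbacks import TeacherForcingAnnealCallback, TimeMaskedLossCallback

callbacks = [
    TeacherForcingAnnealCallback(start=0.8, end=0.0, epochs=10),  # assumed: anneal teacher forcing over 10 epochs
    TimeMaskedLossCallback(start=1, end=24, steps=1000),          # assumed: grow the masked loss horizon over 1000 steps
]
trainer = pl.Trainer(max_epochs=20, callbacks=callbacks)
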
spacetimeformer.eval_stats.gmae(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Geometric Mean Absolute Error

spacetimeformer.eval_stats.gmrae(actual: numpy.ndarray, predicted: numpy.ndarray, benchmark: Optional[numpy.ndarray] = None)[source]

Geometric Mean Relative Absolute Error

spacetimeformer.eval_stats.inrse(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Integral Normalized Root Squared Error

spacetimeformer.eval_stats.maape(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Arctangent Absolute Percentage Error

Note: result is NOT multiplied by 100

spacetimeformer.eval_stats.mad(actual: numpy.ndarray, predicted: numpy.ndarray)

Mean Absolute Error

spacetimeformer.eval_stats.mae(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Absolute Error

spacetimeformer.eval_stats.mape(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Absolute Percentage Error

Properties:
  • Easy to interpret

  • Scale independent

  • Biased, not symmetric

  • Undefined when actual[t] == 0

Note: result is NOT multiplied by 100
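
A minimal usage sketch (values are illustrative) showing the fractional result:

import numpy as np
from spacetimeformer import eval_stats

actual = np.array([100.0, 200.0, 400.0])
predicted = np.array([110.0, 180.0, 440.0])

# each absolute percentage error is 0.1, so the mean is 0.1 (i.e. 10%, not multiplied by 100)
print(eval_stats.mape(actual, predicted))  # 0.1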

spacetimeformer.eval_stats.mase(actual: numpy.ndarray, predicted: numpy.ndarray, seasonality: int = 1)[source]

Mean Absolute Scaled Error

Baseline (benchmark) is computed with naive forecasting (shifted by @seasonality)
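
For reference, a sketch of the scaled-error idea (standard MASE definition with an in-sample naive baseline; the package's exact implementation may differ in detail):

import numpy as np

def mase_reference(actual: np.ndarray, predicted: np.ndarray, seasonality: int = 1) -> float:
    # naive seasonal baseline: forecast each point with the value observed `seasonality` steps earlier
    naive_mae = np.mean(np.abs(actual[seasonality:] - actual[:-seasonality]))
    return float(np.mean(np.abs(actual - predicted)) / naive_mae)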

spacetimeformer.eval_stats.mbrae(actual: numpy.ndarray, predicted: numpy.ndarray, benchmark: Optional[numpy.ndarray] = None)[source]

Mean Bounded Relative Absolute Error

spacetimeformer.eval_stats.mda(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Directional Accuracy

spacetimeformer.eval_stats.mdae(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Median Absolute Error

spacetimeformer.eval_stats.mdape(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Median Absolute Percentage Error

Note: result is NOT multiplied by 100

spacetimeformer.eval_stats.mdrae(actual: numpy.ndarray, predicted: numpy.ndarray, benchmark: Optional[numpy.ndarray] = None)[source]

Median Relative Absolute Error

spacetimeformer.eval_stats.me(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Error

spacetimeformer.eval_stats.mpe(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Percentage Error

spacetimeformer.eval_stats.mrae(actual: numpy.ndarray, predicted: numpy.ndarray, benchmark: Optional[numpy.ndarray] = None)[source]

Mean Relative Absolute Error

spacetimeformer.eval_stats.mre(actual: numpy.ndarray, predicted: numpy.ndarray, benchmark: Optional[numpy.ndarray] = None)[source]

Mean Relative Error

spacetimeformer.eval_stats.mse(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Mean Squared Error

spacetimeformer.eval_stats.nrmse(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Normalized Root Mean Squared Error

spacetimeformer.eval_stats.rae(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Relative Absolute Error (aka Approximation Error)

spacetimeformer.eval_stats.rmdspe(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Root Median Squared Percentage Error

Note: result is NOT multiplied by 100

spacetimeformer.eval_stats.rmse(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Root Mean Squared Error

spacetimeformer.eval_stats.rmspe(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Root Mean Squared Percentage Error

Note: result is NOT multiplied by 100

spacetimeformer.eval_stats.rmsse(actual: numpy.ndarray, predicted: numpy.ndarray, seasonality: int = 1)[source]

Root Mean Squared Scaled Error

spacetimeformer.eval_stats.rrse(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Root Relative Squared Error

spacetimeformer.eval_stats.smape(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Symmetric Mean Absolute Percentage Error

Note: result is NOT multiplied by 100

spacetimeformer.eval_stats.smdape(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Symmetric Median Absolute Percentage Error

Note: result is NOT multiplied by 100

spacetimeformer.eval_stats.std_ae(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Normalized Absolute Error

spacetimeformer.eval_stats.std_ape(actual: numpy.ndarray, predicted: numpy.ndarray)[source]

Normalized Absolute Percentage Error

spacetimeformer.eval_stats.umbrae(actual: numpy.ndarray, predicted: numpy.ndarray, benchmark: Optional[numpy.ndarray] = None)[source]

Unscaled Mean Bounded Relative Absolute Error
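
The relative-error metrics above (gmrae, mbrae, mdrae, mrae, mre, umbrae) accept an explicit benchmark forecast; passing one avoids relying on the implementation's default baseline. Illustrative sketch:

import numpy as np
from spacetimeformer import eval_stats

actual = np.array([1.0, 2.0, 3.0, 4.0])
predicted = np.array([1.1, 1.9, 3.2, 3.8])
benchmark = np.full_like(actual, actual.mean())  # e.g. a constant mean forecast as the baseline

print(eval_stats.mrae(actual, predicted, benchmark=benchmark))
print(eval_stats.umbrae(actual, predicted, benchmark=benchmark))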

class spacetimeformer.forecaster.Forecaster(learning_rate: float = 0.001, l2_coeff: float = 0, loss: str = 'mse', linear_window: int = 0)[source]

Bases: pytorch_lightning.core.lightning.LightningModule, abc.ABC

classmethod add_cli(parser)[source]
compute_loss(batch: Tuple[torch.Tensor], time_mask: Optional[int] = None, forward_kwargs: dict = {}) → Tuple[torch.Tensor][source]
configure_optimizers()[source]

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple.

Returns

Any of these 6 options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • Tuple of dictionaries as described above, with an optional "frequency" key.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified in 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

Metrics can be made available for monitoring by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.
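
For example, a scheduler conditioned on a logged metric (minimal sketch; assumes the usual torch import and that "val_loss" is logged during validation):

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=3)
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
    }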

Note

The frequency value specified in a dict along with the optimizer key is an int corresponding to the number of sequential batches optimized with the specific optimizer. It should be given to none or to all of the optimizers. There is a difference between passing multiple optimizers in a list, and passing multiple optimizers in dictionaries with a frequency of 1:

  • In the former case, all optimizers will operate on the given batch in each optimization step.

  • In the latter, only one optimizer will operate on the given batch at every step.

This is different from the frequency value specified in the lr_scheduler_config mentioned above.

def configure_optimizers(self):
    optimizer_one = torch.optim.SGD(self.model.parameters(), lr=0.01)
    optimizer_two = torch.optim.SGD(self.model.parameters(), lr=0.01)
    return [
        {"optimizer": optimizer_one, "frequency": 5},
        {"optimizer": optimizer_two, "frequency": 10},
    ]

In this example, the first optimizer will be used for the first 5 steps, the second optimizer for the next 10 steps and that cycle will continue. If an LR scheduler is specified for an optimizer using the lr_scheduler key in the above dict, the scheduler will only be updated when its optimizer is being used.

Examples:

# most cases. no learning rate scheduler
def configure_optimizers(self):
    return Adam(self.parameters(), lr=1e-3)

# multiple optimizer case (e.g.: GAN)
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    return gen_opt, dis_opt

# example with learning rate schedulers
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    dis_sch = CosineAnnealing(dis_opt, T_max=10)
    return [gen_opt, dis_opt], [dis_sch]

# example with step-based learning rate schedulers
# each optimizer has its own scheduler
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    gen_sch = {
        'scheduler': ExponentialLR(gen_opt, 0.99),
        'interval': 'step'  # called after each training step
    }
    dis_sch = CosineAnnealing(dis_opt, T_max=10) # called every epoch
    return [gen_opt, dis_opt], [gen_sch, dis_sch]

# example with optimizer frequencies
# see training procedure in `Improved Training of Wasserstein GANs`, Algorithm 1
# https://arxiv.org/abs/1704.00028
def configure_optimizers(self):
    gen_opt = Adam(self.model_gen.parameters(), lr=0.01)
    dis_opt = Adam(self.model_dis.parameters(), lr=0.02)
    n_critic = 5
    return (
        {'optimizer': dis_opt, 'frequency': n_critic},
        {'optimizer': gen_opt, 'frequency': 1}
    )

Note

Some things to know:

  • Lightning calls .backward() and .step() on each optimizer and learning rate scheduler as needed.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizers.

  • If you use multiple optimizers, training_step() will have an additional optimizer_idx parameter.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, gradients will be calculated only for the parameters of the current optimizer at each training step.

  • If you need to control how often those optimizers step or override the default .step() schedule, override the optimizer_step() hook.

forecasting_loss(outputs: torch.Tensor, y_t: torch.Tensor, time_mask: int) → Tuple[torch.Tensor][source]
forward(x_c: torch.Tensor, y_c: torch.Tensor, x_t: torch.Tensor, y_t: torch.Tensor, **forward_kwargs) → Tuple[torch.Tensor][source]

Same as torch.nn.Module.forward().

Parameters
  • *args – Whatever you decide to pass into the forward method.

  • **kwargs – Keyword arguments are also possible.

Returns

Your model’s output
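
A shape-oriented sketch (assumes (batch, length, dim) tensor layouts; model stands for any concrete Forecaster subclass, and the sizes are placeholders):

import torch

x_c = torch.randn(8, 48, 4)   # context timestamps / covariates
y_c = torch.randn(8, 48, 1)   # context target values
x_t = torch.randn(8, 12, 4)   # target timestamps / covariates
y_t = torch.zeros(8, 12, 1)   # target values (used for the loss / teacher forcing during training)

outputs = model(x_c=x_c, y_c=y_c, x_t=x_t, y_t=y_t)  # returns a tuple of tensors per the signature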

abstract forward_model_pass(x_c: torch.Tensor, y_c: torch.Tensor, x_t: torch.Tensor, y_t: torch.Tensor, **forward_kwargs) → Tuple[torch.Tensor][source]
loss_fn(true: torch.Tensor, preds: torch.Tensor, mask: torch.Tensor) → torch.Tensor[source]
predict(x_c: torch.Tensor, y_c: torch.Tensor, x_t: torch.Tensor, sample_preds: bool = False) → torch.Tensor[source]
predict_step(batch, batch_idx)[source]

Step function called during predict(). By default, it calls forward(). Override to add any processing logic.

predict_step() is used to scale inference across multiple devices.

To prevent an OOM error, it is possible to use the BasePredictionWriter callback to write the predictions to disk or a database after each batch or at epoch end.

The BasePredictionWriter should be used while using a spawn-based accelerator. This happens for Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(tpu_cores=8), as predictions won’t be returned.

Example

class MyModel(LightningModule):

    def predict_step(self, batch, batch_idx, dataloader_idx):
        return self(batch)

dm = ...
model = MyModel()
trainer = Trainer(gpus=2)
predictions = trainer.predict(model, dm)
Parameters
  • batch – Current batch

  • batch_idx – Index of current batch

  • dataloader_idx – Index of the current dataloader

Returns

Predicted output

set_inv_scaler(scaler) → None[source]
set_null_value(val: float) → None[source]
set_scaler(scaler) → None[source]
step(batch: Tuple[torch.Tensor], train: bool = False)[source]
test_step(batch, batch_idx)[source]

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters
  • batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.

  • batch_idx (int) – The index of this batch.

  • dataloader_idx (int) – The index of the dataloader that produced this batch (only if multiple test dataloaders used).

Returns

Any of the following:

  • Any object or value

  • None - Testing will skip to the next batch

# if you have one test dataloader:
def test_step(self, batch, batch_idx):
    ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx):
    ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to test, you don’t need to implement this method.

Note

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

test_step_end(outs)[source]

Use this when testing with dp or ddp2 because test_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note

If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [test_step(sub_batch) for sub_batch in sub_batches]
test_step_end(batch_parts_outputs)
Parameters

batch_parts_outputs – What you return in test_step() for each batch part.

Returns

None or anything

# WITHOUT test_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    loss = self.softmax(out)
    self.log("test_loss", loss)


# --------------
# with test_step_end to do softmax over the full batch
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    return out


def test_step_end(self, output_results):
    # this out is now the full size of the batch
    all_test_step_outs = output_results.out
    loss = nce_loss(all_test_step_outs)
    self.log("test_loss", loss)

See also

See the Multi-GPU training guide for more details.

training_step(batch, batch_idx)[source]

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Parameters
  • batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.

  • batch_idx (int) – The index of this batch.

Returns

Any of the following:

  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'

  • None - Training will skip to the next batch. This is only for automatic optimization.

    This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

If you define multiple optimizers, this step will be called with an additional optimizer_idx parameter.

# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        # do training_step with encoder
        ...
    if optimizer_idx == 1:
        # do training_step with decoder
        ...

If you add truncated back propagation through time you will also get an additional argument with the hidden states of the previous step.

# Truncated back-propagation through time
def training_step(self, batch, batch_idx, hiddens):
    # hiddens are the hidden states from the previous truncated backprop step
    x, y = batch
    out, hiddens = self.lstm(x, hiddens)
    loss = ...
    return {"loss": loss, "hiddens": hiddens}

Note

The loss value shown in the progress bar is smoothed (averaged) over the last values, so it differs from the actual loss returned in train/validation step.

training_step_end(outs)[source]

Use this when training with dp or ddp2 because training_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note

If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [training_step(sub_batch) for sub_batch in sub_batches]
training_step_end(batch_parts_outputs)
Parameters

batch_parts_outputs – What you return in training_step for each batch part.

Returns

Anything

When using dp/ddp2 distributed backends, only a portion of the batch is inside the training_step:

def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)

    # softmax uses only a portion of the batch in the denominator
    loss = self.softmax(out)
    loss = nce_loss(loss)
    return loss

If you wish to do something with all the parts of the batch, then use this method to do it:

def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    return {"pred": out}


def training_step_end(self, training_step_outputs):
    gpu_0_pred = training_step_outputs[0]["pred"]
    gpu_1_pred = training_step_outputs[1]["pred"]
    gpu_n_pred = training_step_outputs[n]["pred"]

    # this softmax now uses the full batch
    loss = nce_loss([gpu_0_pred, gpu_1_pred, gpu_n_pred])
    return loss

See also

See the Multi-GPU training guide for more details.

validation_step(batch, batch_idx)[source]

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters
  • batch (Tensor | (Tensor, …) | [Tensor, …]) – The output of your DataLoader. A tensor, tuple or list.

  • batch_idx (int) – The index of this batch

  • dataloader_idx (int) – The index of the dataloader that produced this batch (only if multiple val dataloaders used)

Returns

  • Any object or value

  • None - Validation will skip to the next batch

# pseudocode of order
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    if defined("validation_step_end"):
        out = validation_step_end(out)
    val_outs.append(out)
val_outs = validation_epoch_end(val_outs)
# if you have one val dataloader:
def validation_step(self, batch, batch_idx):
    ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx):
    ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx):
    # dataloader_idx tells you which dataset this is.
    ...

Note

If you don’t need to validate, you don’t need to implement this method.

Note

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

validation_step_end(outs)[source]

Use this when validating with dp or ddp2 because validation_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note

If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
batch_parts_outputs = [validation_step(sub_batch) for sub_batch in sub_batches]
validation_step_end(batch_parts_outputs)
Parameters

batch_parts_outputs – What you return in validation_step() for each batch part.

Returns

None or anything

# WITHOUT validation_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    loss = self.softmax(out)
    loss = nce_loss(loss)
    self.log("val_loss", loss)


# --------------
# with validation_step_end to do softmax over the full batch
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    return out


def validation_step_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

See also

See the Multi-GPU training guide for more details.

abstract property eval_step_forward_kwargs
abstract property train_step_forward_kwargs
training: bool
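
A minimal subclass sketch illustrating the abstract interface (the toy model and the d_y dimension name are assumptions for illustration, not part of the package):

import torch
from torch import nn
from spacetimeformer.forecaster import Forecaster

class LastValueForecaster(Forecaster):
    # Toy model: forecasts every target step from the last observed context value.

    def __init__(self, d_y: int = 1, **forecaster_kwargs):
        super().__init__(**forecaster_kwargs)
        self.proj = nn.Linear(d_y, d_y)

    def forward_model_pass(self, x_c, y_c, x_t, y_t, **forward_kwargs):
        last = y_c[:, -1:, :].repeat(1, x_t.shape[1], 1)  # broadcast the last context value
        return (self.proj(last),)                         # tuple, matching the abstract signature

    @property
    def train_step_forward_kwargs(self):
        return {}

    @property
    def eval_step_forward_kwargs(self):
        return {}
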
class spacetimeformer.plot.AttentionMatrixCallback(test_batches, layer=0, total_samples=32, raw_data_dir=None)[source]

Bases: pytorch_lightning.callbacks.base.Callback

on_validation_end(trainer, model)[source]

Called when the validation loop ends.

class spacetimeformer.plot.PredictionPlotterCallback(test_batches, total_samples=4, log_to_wandb=True)[source]

Bases: pytorch_lightning.callbacks.base.Callback

on_validation_end(trainer, model)[source]

Called when the validation loop ends.
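
Illustrative wiring for both plotting callbacks (an assumption about usage: test_batches is taken to be a batch tuple drawn from the project's test DataLoader, referenced here by the placeholder name test_dataloader):

import pytorch_lightning as pl
from spacetimeformer.plot import AttentionMatrixCallback, PredictionPlotterCallback

test_samples = next(iter(test_dataloader))  # placeholder: your test DataLoader
callbacks = [
    PredictionPlotterCallback(test_samples, total_samples=4, log_to_wandb=False),
    AttentionMatrixCallback(test_samples, layer=0, total_samples=32),
]
trainer = pl.Trainer(callbacks=callbacks)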

spacetimeformer.plot.attn_plot(attn, title, tick_spacing=None)[source]
spacetimeformer.plot.plot(x_c, y_c, x_t, y_t, preds, conf=None)[source]
class spacetimeformer.time2vec.Time2Vec(input_dim=6, embed_dim=512, act_function=torch.sin)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
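
Usage sketch (the (batch, length, input_dim) input layout is an assumption based on the constructor arguments):

import torch
from spacetimeformer.time2vec import Time2Vec

t2v = Time2Vec(input_dim=6, embed_dim=512)
x = torch.randn(8, 48, 6)  # (batch, sequence length, time features)
emb = t2v(x)               # per-timestep time embedding whose width is governed by embed_dim
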
spacetimeformer.train.create_callbacks(config)[source]
spacetimeformer.train.create_dset(config)[source]
spacetimeformer.train.create_model(config)[source]
spacetimeformer.train.create_parser()[source]
spacetimeformer.train.main(args)[source]
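
These helpers compose in the obvious way; a sketch of a typical entry point (whether train.py does exactly this is an assumption):

from spacetimeformer import train

if __name__ == "__main__":
    parser = train.create_parser()
    args = parser.parse_args()
    train.main(args)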