How does wandb.tensorboard.patch work?

Hi everyone,
I’m trying to move from TensorBoard to wandb, which seems more flexible and has some useful additional features for customizing plots.
In my code I define two TensorBoard SummaryWriters, and I’d like to import the plots associated with one of them into wandb.
In other words, I want to include only a part of my code’s TensorBoard logs in wandb.
Is it possible to do this?
Suppose, for example, that the two summaries are stored in two different folders, a and a/b:

from torch.utils.tensorboard import SummaryWriter

Sum1 = SummaryWriter('a')
Sum2 = SummaryWriter('a/b')

and I’m only interested in importing Sum2 into wandb.
I thought that

wandb.tensorboard.patch(root_logdir='a/b', pytorch=True)

was the way to do it, but that doesn’t seem to be the case.
In fact, I get this message:

wandb: WARNING Found log directory outside of given root_logdir, dropping given root_logdir for eventfile in a

I want paths outside root_logdir to be ignored.
Apart from this page, I didn’t find clear documentation about wandb.tensorboard.patch.
Any idea about how to proceed?

Hey,
I work for W&B, happy to try to help here.

Judging from that error message, it seems wandb is trying to sync a directory outside of the given root log dir. You may want to make sure the path is correct.

One thing to note: you must call either wandb.init or wandb.tensorboard.patch before calling tf.summary.create_file_writer or constructing a SummaryWriter via torch.utils.tensorboard.
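For instance, here’s a minimal sketch of that ordering with a PyTorch SummaryWriter, assuming you use wandb.init with sync_tensorboard=True as the entry point (the project name and logged values are just placeholders):

import wandb
from torch.utils.tensorboard import SummaryWriter

# init (or wandb.tensorboard.patch) has to come before the writer is constructed
wandb.init(project="my-project", sync_tensorboard=True)

# this writer is created after patching, so its event files get picked up by wandb
writer = SummaryWriter('a/b')
for step in range(10):
    writer.add_scalar("loss", 1.0 / (step + 1), step)
writer.close()

wandb.finish()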

Also, from the CLI, you can call wandb sync log_dir to sync just your subdirectory. I’ll reach out to the engineering team internally or reproduce it myself, and follow up with more information about whether it’s possible to patch a subdirectory, as this could be an issue with our CLI.

You may want to post this as an issue on our client GitHub repo (https://github.com/wandb/client), which contains the CLI and Python API, or ask for help within our forum: https://community.wandb.ai/


Hi,
Thank you so much for your reply! I’ll report my doubt on the client GitHub as well.
I’d like to use this occasion to ask you another question:
Let’s say that I have many log calls inside my code (each one registers some measures) and I want to associate a different x-axis sequence with each of them for the dashboard charts.
I know that it is possible to do that by adding a customized step sequence as an additional measure in the same logs and then manually setting it as the x-axis on the dashboard.
Anyway, I’d like to set the step (the x-axis variable) automatically from the log call.
I tried with the step argument, but since I want different customized sequences for different log calls this doesn’t work (because step is supposed to be an increasing sequence shared between all the calls).
Is there a way to set a different x-axis sequence for different groups of logged measures in an automatic way (instead of doing it manually chart by chart on the dashboard)?

This is a good question. You can use define_metric to define a custom x-axis.

import wandb

wandb.init()
# define our custom x-axis metric
wandb.define_metric("custom_step")
# define which metrics will be plotted against it
wandb.define_metric("validation_loss", step_metric="custom_step")

for i in range(10):
    log_dict = {
        "train_loss": 1 / (i + 1),       # still plotted against the default step
        "custom_step": i**2,             # the custom x-axis value for this log call
        "validation_loss": 1 / (i + 1),  # plotted against custom_step
    }
    wandb.log(log_dict)
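To address the part about different x-axis sequences for different groups of measures, here’s a minimal sketch extending the example above. It assumes each group gets its own step metric and uses glob patterns (e.g. "train/*") with define_metric; the metric names and step sequences are placeholders:

import wandb

wandb.init()

# one custom step metric per group of measures
wandb.define_metric("train_step")
wandb.define_metric("eval_step")
# glob patterns attach every metric in a group to its own step metric
wandb.define_metric("train/*", step_metric="train_step")
wandb.define_metric("eval/*", step_metric="eval_step")

for i in range(10):
    # the train/* charts advance along a linear x-axis sequence
    wandb.log({"train/loss": 1 / (i + 1), "train_step": i})
    # the eval/* charts advance along a different (quadratic) x-axis sequence
    wandb.log({"eval/loss": 1 / (i + 1), "eval_step": i**2})

wandb.finish()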