Error while setting dropout value in EfficientNet (PyTorch) from the torchvision library

When I run the following code I get this error:

model = torchvision_models.efficientnet_b0(stochastic_depth_prob=0.8,num_classes=2,dropout=0.5)

TypeError                                 Traceback (most recent call last)
Input In [144], in <cell line: 1>()
----> 1 model = torchvision_models.efficientnet_b0(stochastic_depth_prob=0.8,num_classes=2,dropout=0.5)

File ~/miniconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:142, in kwonly_to_pos_or_kw.<locals>.wrapper(*args, **kwargs)
    135     warnings.warn(
    136         f"Using {sequence_to_str(tuple(keyword_only_kwargs.keys()), separate_last='and ')} as positional "
    137         f"parameter(s) is deprecated since 0.13 and will be removed in 0.15. Please use keyword parameter(s) "
    138         f"instead."
    139     )
    140     kwargs.update(keyword_only_kwargs)
--> 142 return fn(*args, **kwargs)

File ~/miniconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:228, in handle_legacy_interface.<locals>.outer_wrapper.<locals>.inner_wrapper(*args, **kwargs)
    225     del kwargs[pretrained_param]
    226     kwargs[weights_param] = default_weights_arg
--> 228 return builder(*args, **kwargs)

File ~/miniconda3/lib/python3.9/site-packages/torchvision/models/efficientnet.py:757, in efficientnet_b0(weights, progress, **kwargs)
    754 weights = EfficientNet_B0_Weights.verify(weights)
    756 inverted_residual_setting, last_channel = _efficientnet_conf("efficientnet_b0", width_mult=1.0, depth_mult=1.0)
--> 757 return _efficientnet(inverted_residual_setting, 0.2, last_channel, weights, progress, **kwargs)

TypeError: _efficientnet() got multiple values for argument 'dropout'

I checked the documentation (efficientnet_b0 — Torchvision main documentation, vision/efficientnet.py at main · pytorch/vision · GitHub), and this is what it states:

    Args:
        inverted_residual_setting (Sequence[Union[MBConvConfig, FusedMBConvConfig]]): Network structure
        dropout (float): The dropout probability
        stochastic_depth_prob (float): The stochastic depth probability
        num_classes (int): Number of classes
        norm_layer (Optional[Callable[..., nn.Module]]): Module specifying the normalization layer to use
        last_channel (int): The number of channels on the penultimate layer
    """

Based on this line of code, it seems the dropout value is hard-coded to 0.2:

return _efficientnet(inverted_residual_setting, 0.2, last_channel, weights, progress, **kwargs)

which is the reason the error is raised.
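To make the failure mode concrete, here is a minimal, self-contained sketch (hypothetical stand-in functions, not torchvision's actual code) of how a positional argument and a `**kwargs` entry can fill the same parameter twice:

```python
# Hypothetical stand-ins for torchvision's internals, for illustration only.
def _efficientnet(dropout, num_classes=1000):
    return {"dropout": dropout, "num_classes": num_classes}

def efficientnet_b0(**kwargs):
    # dropout is hard-coded positionally, like the 0.2 in the traceback above;
    # a user-supplied dropout=0.5 then arrives a second time via **kwargs.
    return _efficientnet(0.2, **kwargs)

try:
    efficientnet_b0(num_classes=2, dropout=0.5)
except TypeError as e:
    print(e)  # _efficientnet() got multiple values for argument 'dropout'
```

Python raises the TypeError before the function body ever runs, because the same parameter slot is bound both positionally and by keyword.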

CC @pmeier for visibility, in case this is a mistake.


Hey @akshay_23, and thanks for the report. I took the liberty of opening an issue on TorchVision’s issue tracker “in your name”: EfficientNet documentation is (probably) misleading · Issue #7029 · pytorch/vision · GitHub.


Thanks @pmeier for opening the issue. I feel it could be a bug that requires a simple fix, or perhaps the values were deliberately hard-coded for some reason; I will look into it.

It was deemed a bug and was fixed in Allow dropout overwrites on EfficientNet by datumbox · Pull Request #7031 · pytorch/vision · GitHub. If you use the nightly version of torchvision, you should have access to the fix in a few hours. Otherwise you’ll have to wait for the next release in Q1 of 2023.


Sure, thanks. Currently I can set it using
model.classifier[0] = torch.nn.Dropout(p=self.dropout_p, inplace=True)
so it's not much of a hassle. I initially thought there was something wrong with the way I coded it, but that has been clarified now; I will check out the nightly version of torchvision.
Thanks for the quick response.
