
Update attention.py #217

Open · wants to merge 1 commit into main
Conversation

@klae01 commented Sep 8, 2022

Motivation and Context

The official PyTorch Hub X3D example does not work in Colab:
https://pytorch.org/hub/facebookresearch_pytorchvideo_x3d/

How Has This Been Tested

import torch
# Choose the `x3d_s` model
model_name = 'x3d_s'
model = torch.hub.load('facebookresearch/pytorchvideo', model_name, pretrained=True)

before:

Downloading: "https://github.com/facebookresearch/pytorchvideo/zipball/main" to /root/.cache/torch/hub/main.zip
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-1-a2edf265ed61> in <module>
      2 # Choose the `x3d_s` model
      3 model_name = 'x3d_s'
----> 4 model = torch.hub.load('facebookresearch/pytorchvideo', model_name, pretrained=True)

10 frames
/usr/local/lib/python3.7/dist-packages/torch/hub.py in load(repo_or_dir, model, source, trust_repo, force_reload, verbose, skip_validation, *args, **kwargs)
    538                                            verbose=verbose, skip_validation=skip_validation)
    539 
--> 540     model = _load_local(repo_or_dir, model, *args, **kwargs)
    541     return model
    542 

/usr/local/lib/python3.7/dist-packages/torch/hub.py in _load_local(hubconf_dir, model, *args, **kwargs)
    564 
    565     hubconf_path = os.path.join(hubconf_dir, MODULE_HUBCONF)
--> 566     hub_module = _import_module(MODULE_HUBCONF, hubconf_path)
    567 
    568     entry = _load_entry_from_hubconf(hub_module, model)

/usr/local/lib/python3.7/dist-packages/torch/hub.py in _import_module(name, path)
     87     module = importlib.util.module_from_spec(spec)
     88     assert isinstance(spec.loader, Loader)
---> 89     spec.loader.exec_module(module)
     90     return module
     91 

/usr/lib/python3.7/importlib/_bootstrap_external.py in exec_module(self, module)

/usr/lib/python3.7/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

~/.cache/torch/hub/facebookresearch_pytorchvideo_main/hubconf.py in <module>
      2 
      3 dependencies = ["torch"]
----> 4 from pytorchvideo.models.hub import (  # noqa: F401, E402
      5     c2d_r50,
      6     csn_r101,

~/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/models/__init__.py in <module>
      1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
      2 
----> 3 from .csn import create_csn
      4 from .head import create_res_basic_head, ResNetBasicHead
      5 from .masked_multistream import (

~/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/models/csn.py in <module>
      5 import torch
      6 import torch.nn as nn
----> 7 from pytorchvideo.models.head import create_res_basic_head
      8 from pytorchvideo.models.resnet import create_bottleneck_block, create_res_stage, Net
      9 from pytorchvideo.models.stem import create_res_basic_stem

~/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/models/head.py in <module>
      5 import torch
      6 import torch.nn as nn
----> 7 from pytorchvideo.layers.utils import set_attributes
      8 from torchvision.ops import RoIAlign
      9 

~/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/layers/__init__.py in <module>
      1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
      2 
----> 3 from .attention import Mlp, MultiScaleAttention, MultiScaleBlock
      4 from .attention_torchscript import ScriptableMultiScaleBlock
      5 from .drop_path import DropPath

~/.cache/torch/hub/facebookresearch_pytorchvideo_main/pytorchvideo/layers/attention.py in <module>
     11 
     12 
---> 13 @torch.fx.wrap
     14 def _unsqueeze_dims_fx(tensor: torch.Tensor) -> Tuple[torch.Tensor, int]:
     15     tensor_dim = tensor.ndim

AttributeError: module 'torch' has no attribute 'fx'
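The crash happens at import time: attention.py applies @torch.fx.wrap while the module is being loaded, which fails on torch builds where torch.fx is not reachable as an attribute of torch. For illustration only, here is a minimal sketch of the kind of guard that avoids this; the _fx_wrap name is hypothetical and this is not the exact diff in this PR:

from typing import Tuple

import torch

try:
    # torch.fx is a submodule; on some torch builds it is not exposed
    # by `import torch` alone and must be imported explicitly.
    import torch.fx
    _fx_wrap = torch.fx.wrap
except (ImportError, AttributeError):
    # Hypothetical fallback: a no-op decorator so the module still
    # imports when torch.fx is unavailable (FX tracing support is lost).
    def _fx_wrap(fn):
        return fn


@_fx_wrap
def _unsqueeze_dims_fx(tensor: torch.Tensor) -> Tuple[torch.Tensor, int]:
    tensor_dim = tensor.ndim
    ...

With a guard like this, torch.hub.load can import the package even in environments whose torch install predates or omits torch.fx.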

after:

Using cache found in /root/.cache/torch/hub/facebookresearch_pytorchvideo_main
Downloading: "https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D_S.pyth" to /root/.cache/torch/hub/checkpoints/X3D_S.pyth
100% 29.4M/29.4M [00:01<00:00, 35.1MB/s]
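As an extra smoke test (not part of this PR), a dummy forward pass should now run end to end; the clip shape below assumes the X3D-S defaults from the official example (13 frames at 182×182):

import torch

model = torch.hub.load('facebookresearch/pytorchvideo', 'x3d_s', pretrained=True)
model = model.eval()

# Dummy clip: (batch, channels, frames, height, width). The 13-frame,
# 182x182 shape is assumed from the X3D-S transform defaults.
clip = torch.randn(1, 3, 13, 182, 182)
with torch.no_grad():
    preds = model(clip)

print(preds.shape)  # expected: torch.Size([1, 400]), Kinetics-400 logits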

Types of changes

  • Docs change / refactoring / dependency upgrade
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have completed my CLA (see CONTRIBUTING)
  • I have added tests to cover my changes.
  • All new and existing tests passed.

@facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Sep 8, 2022
@facebook-github-bot (Contributor) commented:

@haooooooqi has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
