creating a doc page with autodocs and everything #1909

Triggered via pull request July 31, 2023 13:45
Status Failure
Total duration 34m 25s

ci.yml

on: pull_request

Annotations

6 errors
test: flair/__init__.py#L1
mypy-status mypy exited with status 1.
test: flair/embeddings/image.py#L341
ruff pytest_ruff.RuffError: flair/embeddings/image.py:76:39: RUF015 [*] Prefer `next(iter(self.url2tensor_dict.values()))` over single element slice
   |
74 | self.url2tensor_dict = url2tensor_dict
75 | self.name = name
76 | self.__embedding_length = len(list(self.url2tensor_dict.values())[0])
   |                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RUF015
77 | self.static_embeddings = True
78 | super().__init__()
   |
   = help: Replace with `next(iter(self.url2tensor_dict.values()))`
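The fix ruff proposes here is mechanical: read the first value of the dict through an iterator instead of building a full list and slicing it. A minimal sketch of that change, assuming nothing beyond the error context (the dict name and the length calculation mirror the snippet above; the sample values are made up for illustration):

```python
# Illustrative data only; in the real class this dict maps image URLs to tensors.
url2tensor_dict = {"https://example.com/a.png": [0.1, 0.2, 0.3]}

# Flagged by RUF015: materializes every value just to read element 0.
embedding_length = len(list(url2tensor_dict.values())[0])

# Suggested replacement: pull the first value lazily from the values() view.
embedding_length = len(next(iter(url2tensor_dict.values())))
```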
test: flair/models/multitask_model.py#L1
flair/models/multitask_model.py:119: error: Signature of "evaluate" incompatible with supertype "Classifier" [override]
flair/models/multitask_model.py:119: note: Superclass:
flair/models/multitask_model.py:119: note:     def evaluate(self, data_points: Union[List[Any], Dataset[Any]], gold_label_type: str, out_path: Union[str, Path, None] = ..., embedding_storage_mode: str = ..., mini_batch_size: int = ..., main_evaluation_metric: Tuple[str, str] = ..., exclude_labels: List[str] = ..., gold_label_dictionary: Optional[Dictionary] = ..., return_loss: bool = ..., **kwargs: Any) -> Result
flair/models/multitask_model.py:119: note: Subclass:
flair/models/multitask_model.py:119: note:     def evaluate(self, data_points: Any, gold_label_type: str, out_path: Union[str, Path, None] = ..., main_evaluation_metric: Tuple[str, str] = ..., evaluate_all: bool = ..., **evalargs: Any) -> Result
flair/models/multitask_model.py:119: error: Signature of "evaluate" incompatible with supertype "Model" [override]
flair/models/multitask_model.py:119: note: Superclass:
flair/models/multitask_model.py:119: note:     def evaluate(self, data_points: Union[List[Any], Dataset[Any]], gold_label_type: str, out_path: Union[str, Path, None] = ..., embedding_storage_mode: str = ..., mini_batch_size: int = ..., main_evaluation_metric: Tuple[str, str] = ..., exclude_labels: List[str] = ..., gold_label_dictionary: Optional[Dictionary] = ..., return_loss: bool = ..., **kwargs: Any) -> Result
flair/models/multitask_model.py:119: note: Subclass:
flair/models/multitask_model.py:119: note:     def evaluate(self, data_points: Any, gold_label_type: str, out_path: Union[str, Path, None] = ..., main_evaluation_metric: Tuple[str, str] = ..., evaluate_all: bool = ..., **evalargs: Any) -> Result
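Mypy raises [override] here because an overriding method must accept at least everything the supertype's method accepts; the subclass `evaluate` drops parameters such as `embedding_storage_mode`, `mini_batch_size`, and `exclude_labels`. A minimal, hypothetical sketch of the pattern (the `Base`/`Sub` classes and parameters below are illustrative, not flair's actual API): keeping the superclass parameters and only adding keyword-only extras with defaults keeps the override compatible.

```python
from typing import Any


class Base:
    def evaluate(self, data_points: Any, gold_label_type: str,
                 mini_batch_size: int = 32, **kwargs: Any) -> float:
        return 0.0


class Sub(Base):
    # Removing `mini_batch_size` here would reproduce the
    # "Signature of 'evaluate' incompatible with supertype" error.
    # Adding keyword-only parameters with defaults is allowed.
    def evaluate(self, data_points: Any, gold_label_type: str,
                 mini_batch_size: int = 32, *, evaluate_all: bool = True,
                 **kwargs: Any) -> float:
        return 0.0
```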
test: flair/trainers/trainer.py#L1
flair/trainers/trainer.py:457: error: Item "None" of "Optional[FlairSampler]" has no attribute "set_dataset" [union-attr]
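This [union-attr] error is mypy's standard complaint about calling a method on an `Optional` value before narrowing away `None`. A hedged sketch of the pattern (the `SamplerLike` protocol and `set_dataset` name are taken from the error message; the rest is illustrative, not the trainer's real code):

```python
from typing import Optional, Protocol


class SamplerLike(Protocol):
    """Stand-in for the sampler interface named in the error."""

    def set_dataset(self, dataset: object) -> None: ...


def prepare(sampler: Optional[SamplerLike], dataset: object) -> None:
    # sampler.set_dataset(dataset)   # error: Item "None" of "Optional[...]" has no attribute "set_dataset"
    if sampler is not None:          # an explicit None check (or assert) narrows the type
        sampler.set_dataset(dataset)
```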
test: tests/test_datasets.py#L780
test_masakhane_corpus[False]
AssertionError: Mismatch in number of sentences for fon@v2/dev
assert 621 == 623
test
Process completed with exit code 1.