Japanese-CLIP

Input

(Image from https://github.com/rinnakk/japanese-clip/blob/master/data/dog.jpeg)

Output

class_count=3
+ idx=0
  category=0[犬 ]
  prob=1.0
+ idx=1
  category=2[象 ]
  prob=0.0
+ idx=2
  category=1[猫 ]
  prob=0.0
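
The probabilities above are standard CLIP-style zero-shot scores: the image embedding and each label embedding are L2-normalized, their dot products (cosine similarities) are scaled, and a softmax turns the similarities into a distribution over the labels. Below is a minimal numpy sketch of that scoring step only; the feature vectors are stand-ins for the outputs of the image and text ONNX models, and the scale factor of 100 is an assumed value rather than one read from the model.

```python
import numpy as np

def zero_shot_probs(image_feature, text_features, scale=100.0):
    """Cosine-similarity + softmax scoring used by CLIP-style models.

    image_feature: (D,) embedding from the image encoder.
    text_features: (N, D) embeddings, one per candidate label.
    scale: logit scale; 100.0 is an assumption, not taken from the exported model.
    """
    # L2-normalize so the dot product equals cosine similarity
    image_feature = image_feature / np.linalg.norm(image_feature)
    text_features = text_features / np.linalg.norm(text_features, axis=1, keepdims=True)

    logits = scale * text_features @ image_feature   # (N,)
    exp = np.exp(logits - logits.max())               # numerically stable softmax
    return exp / exp.sum()

# Toy example with random 512-d features for three candidate labels
rng = np.random.default_rng(0)
image_feature = rng.normal(size=512)
text_features = rng.normal(size=(3, 512))
print(zero_shot_probs(image_feature, text_features))
```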

Usage

The ONNX and prototxt files are downloaded automatically on the first run. An Internet connection is required during the download.
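
As a rough illustration of this first-run behavior, the sketch below downloads a model file only if it is not already present on disk. The file name is inferred from the prototxt names listed under Netron, and the URL is a placeholder; neither reflects the paths actually used by the script.

```python
import os
import urllib.request

# Hypothetical file name and URL, only to illustrate download-on-first-run.
WEIGHT_PATH = "CLIP-ViT-B16-image.onnx"
REMOTE_URL = "https://example.com/japanese-clip/" + WEIGHT_PATH

def download_if_missing(path, url):
    # Skip the download when the file already exists from a previous run.
    if os.path.exists(path):
        return
    print(f"Downloading {url} ...")
    urllib.request.urlretrieve(url, path)

download_if_missing(WEIGHT_PATH, REMOTE_URL)
```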

For the sample image,

$ python3 japanese-clip.py

If you want to specify the input image, put the image path after the --input option.

$ python3 japanese-clip.py --input IMAGE_PATH

You can use the --text option if you want to specify a subset of the text labels to input into the model.
The default labels are "犬" (dog), "猫" (cat), and "象" (elephant).

$ python3 japanese-clip.py --text "犬" --text "猫" --text "象"

By adding the --model_type option, you can specify the model type, selected from "clip" and "cloob" (default: "clip").

$ python3 japanese-clip.py --model_type clip
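
The options above map naturally onto an argparse setup like the one below. This is a hedged sketch of how the command line could be parsed, not the script's actual argument handling; note that --text is repeatable, so it is collected with action="append".

```python
import argparse

parser = argparse.ArgumentParser(description="Japanese-CLIP zero-shot classification")
parser.add_argument("--input", default="dog.jpeg",
                    help="path to the input image (default file name is an assumption)")
parser.add_argument("--text", action="append",
                    help="candidate label; repeat the option to pass several labels")
parser.add_argument("--model_type", default="clip", choices=["clip", "cloob"],
                    help="which exported model pair to use")
args = parser.parse_args()

# Fall back to the default labels when --text is not given.
texts = args.text if args.text else ["犬", "猫", "象"]
print(args.input, texts, args.model_type)
```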

Reference

rinnakk/japanese-clip (https://github.com/rinnakk/japanese-clip)

Framework

PyTorch

Model Format

ONNX opset=11
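
For context, an ONNX file with opset 11 is typically produced from a PyTorch model with torch.onnx.export. The sketch below shows that call for a generic image encoder; the module, input size, and output file name are assumptions, not the actual export script for these models.

```python
import torch
import torchvision

# Stand-in for an image encoder; any torch.nn.Module is exported the same way.
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "image-encoder.onnx",   # hypothetical output file name
    opset_version=11,       # matches the opset listed above
    input_names=["image"],
    output_names=["feature"],
)
```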

Netron

CLIP-ViT-B16-image.onnx.prototxt
CLIP-ViT-B16-text.onnx.prototxt
CLOOB-ViT-B16-image.onnx.prototxt
CLOOB-ViT-B16-text.onnx.prototxt