
Commit

Small docstring fixes for the upcoming release (#1253)
keras-io renders `Call arguments:` as a special keyword, but not
`Call args:`.
mattdangerw authored Sep 21, 2023
1 parent 0b0b9d5 commit cecbeb2
Showing 5 changed files with 6 additions and 6 deletions.
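
For context (not part of the commit), a minimal sketch of the docstring layout that keras-io special-cases; `MyLayer` and its argument are hypothetical:

```python
import keras


class MyLayer(keras.layers.Layer):  # hypothetical layer, for illustration only
    """A one-line summary of the layer.

    Args:
        units: int. The output dimension of the layer.

    Call arguments:
        inputs: The tensor inputs to the layer.
    """
```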
2 changes: 1 addition & 1 deletion keras_nlp/layers/modeling/position_embedding.py
@@ -34,7 +34,7 @@ class PositionEmbedding(keras.layers.Layer):
             to `"glorot_uniform"`.
         seq_axis: The axis of the input tensor where we add the embeddings.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to compute an embedding for, with shape
             `(batch_size, sequence_length, hidden_dim)`. Only the input shape
             will be used, as the position embedding does not depend on the
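For reference (not part of the diff), a minimal usage sketch of `keras_nlp.layers.PositionEmbedding`, assuming the keras-nlp API of this release; the shapes are illustrative:

```python
import keras
import keras_nlp

seq_length, vocab_size, hidden_dim = 128, 1000, 64

inputs = keras.Input(shape=(seq_length,), dtype="int32")
token_embeddings = keras.layers.Embedding(vocab_size, hidden_dim)(inputs)
# The position embedding uses only the input shape; it is added to the token embeddings.
position_embeddings = keras_nlp.layers.PositionEmbedding(sequence_length=seq_length)(token_embeddings)
outputs = token_embeddings + position_embeddings
```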
4 changes: 2 additions & 2 deletions keras_nlp/layers/modeling/reversible_embedding.py
@@ -53,7 +53,7 @@ class ReversibleEmbedding(keras.layers.Embedding):
             For stability, it is usually best to use full precision even when
             working with half or mixed precision training.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to the layer.
         reverse: Boolean. If `True` the layer will perform a linear projection
             from `output_dim` to `input_dim`, instead of a normal embedding
@@ -69,7 +69,7 @@ class ReversibleEmbedding(keras.layers.Embedding):
     # Generate random inputs.
     token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))
 
-    embedding = keras.layers.Embedding(vocab_size, hidden_dim)
+    embedding = keras_nlp.layers.ReversibleEmbedding(vocab_size, hidden_dim)
     # Embed tokens to shape `(batch_size, seq_length, hidden_dim)`.
     hidden_states = embedding(token_ids)
     # Project hidden states to shape `(batch_size, seq_length, vocab_size)`.
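Likewise not part of the diff, a sketch of the `reverse` call argument documented above, under the same API assumptions:

```python
import numpy as np
import keras_nlp

batch_size, seq_length, vocab_size, hidden_dim = 4, 16, 100, 32
token_ids = np.random.randint(vocab_size, size=(batch_size, seq_length))

embedding = keras_nlp.layers.ReversibleEmbedding(vocab_size, hidden_dim)
hidden_states = embedding(token_ids)             # (batch_size, seq_length, hidden_dim)
logits = embedding(hidden_states, reverse=True)  # (batch_size, seq_length, vocab_size)
```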
2 changes: 1 addition & 1 deletion keras_nlp/layers/modeling/rotary_embedding.py
@@ -39,7 +39,7 @@ class RotaryEmbedding(keras.layers.Layer):
         sequence_axis: int. Sequence axis in the input tensor.
         feature_axis: int. Feature axis in the input tensor.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to apply the embedding to. This can have
             any shape, but must contain both a sequence and feature axis. The
             rotary embedding will be applied to `inputs` and returned.
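A hedged usage sketch of `keras_nlp.layers.RotaryEmbedding` (not part of the diff); the tensor shapes and wavelength are illustrative:

```python
import numpy as np
import keras_nlp

batch_size, seq_length, num_heads, head_dim = 2, 16, 8, 64

# Rotary embeddings are typically applied to the query and key projections in attention.
query = np.random.uniform(size=(batch_size, seq_length, num_heads, head_dim)).astype("float32")
rotary = keras_nlp.layers.RotaryEmbedding(max_wavelength=10000)
query_rotated = rotary(query)  # same shape as `query`
```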
2 changes: 1 addition & 1 deletion keras_nlp/layers/modeling/sine_position_encoding.py
@@ -35,7 +35,7 @@ class SinePositionEncoding(keras.layers.Layer):
             curves, as described in Attention is All You Need. Defaults to
             `10000`.
 
-    Call args:
+    Call arguments:
         inputs: The tensor inputs to compute an embedding for, with shape
             `(batch_size, sequence_length, hidden_dim)`.
         start_index: An integer or integer tensor. The starting position to
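A similar sketch for `keras_nlp.layers.SinePositionEncoding` (not part of the diff):

```python
import numpy as np
import keras_nlp

batch_size, seq_length, hidden_dim = 2, 16, 32
token_embeddings = np.random.uniform(size=(batch_size, seq_length, hidden_dim)).astype("float32")

# The sinusoidal encoding has the same shape as its input and is added to it.
positional_encoding = keras_nlp.layers.SinePositionEncoding()(token_embeddings)
outputs = token_embeddings + positional_encoding
```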
2 changes: 1 addition & 1 deletion keras_nlp/models/xlnet/xlnet_backbone.py
@@ -53,7 +53,7 @@ class XLNetBackbone(Backbone):
             defaults to "zeros". The bias initializer for
             the dense and multiheaded relative attention layers.
 
-    Call Args:
+    Call arguments:
         token_ids: Indices of input sequence tokens in the vocabulary of shape
             `[batch_size, sequence_length]`.
         segment_ids: Segment token indices to indicate first and second portions
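Finally, a sketch of the call arguments documented for `XLNetBackbone` (not part of the diff); the constructor values follow the usual keras-nlp backbone pattern and are assumptions, as is `padding_mask`, which is not visible in the hunk above:

```python
import numpy as np
import keras_nlp

# Illustrative constructor values, not taken from the diff.
backbone = keras_nlp.models.XLNetBackbone(
    vocabulary_size=1000,
    num_layers=2,
    num_heads=2,
    hidden_dim=32,
    intermediate_dim=64,
)

batch_size, seq_length = 2, 12
outputs = backbone(
    {
        "token_ids": np.random.randint(1000, size=(batch_size, seq_length)),
        "segment_ids": np.zeros((batch_size, seq_length), dtype="int32"),
        # `padding_mask` is assumed here; it is not shown in the diff hunk.
        "padding_mask": np.ones((batch_size, seq_length), dtype="int32"),
    }
)
```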
