Ensemble #19 (Open)

wants to merge 72 commits into master

Conversation

@davek44 (Contributor) commented Aug 4, 2018

No description provided.

Args:
data_ops: dict with keys 'sequence', 'label', and 'na'.
augment_rc: Boolean
augment_shifts: Int

should be 'augment_shift'

augment_rc: Boolean
augment_shifts: Int
Returns:
data_ops: augmented data

can you document what fields get added to the dict?
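Something along these lines (a sketch only; 'reverse_preds' is the one added field visible in this diff, any others would need to come from the actual code):

    Returns:
      data_ops_list: list of data_ops dicts, one per augmentation. Each dict
        keeps the original 'sequence', 'label', and 'na' entries and gains a
        'reverse_preds' boolean scalar marking whether the sequence was
        reverse-complemented (set by augment_deterministic below).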

return data_ops_list


def augment_deterministic(data_ops, augment_rc=False, augment_shift=0):

when would you use this function? It seems odd to call with augment_shift != 0, but only a single value.
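For context on my question: the usual place for a single-shift deterministic variant is test-time ensembling, where each ensemble member gets one fixed shift, roughly like the hypothetical loop below. The loop itself is a sketch; only the function and argument names come from this PR.

    # hypothetical test-time ensembling over fixed shifts / reverse complement
    data_ops_list = []
    for shift in ensemble_shifts:
        for rc in ([False, True] if ensemble_rc else [False]):
            data_ops_list.append(
                augment_deterministic(data_ops, augment_rc=rc, augment_shift=shift))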

if augment_rc:
data_ops_aug = augment_deterministic_rc(data_ops_aug)
else:
data_ops_aug['reverse_preds'] = tf.zeros((), dtype=tf.bool)

what are the semantics of these as targets?

target_subset=None):
augment_rc=False, augment_shifts=[0],
ensemble_rc=False, ensemble_shifts=[0],
penultimate=False, target_subset=None):

what does penultimate mean here?
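If it means the usual thing, i.e. "return the representation from the layer before the output head instead of the target predictions", a short docstring would help. A hypothetical illustration of that pattern, not this model's code:

    # hypothetical: expose penultimate features instead of final predictions
    h = hidden_layers(sequence)                  # representation before the output layer
    if penultimate:
        preds = h
    else:
        preds = tf.layers.dense(h, num_targets)  # final target predictions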

self.loss_train, self.loss_train_targets, self.targets_train = loss_returns

# optimizer
self.update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

shouldn't this be done after you make the optimizer?
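The conventional TF1 ordering is to gather UPDATE_OPS once the graph is built and make the train op depend on them; a minimal sketch of that pattern, with all names other than the collection call assumed:

    # minimal sketch: batch-norm (and similar) update ops run with each training step
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    with tf.control_dependencies(update_ops):
        self.step_op = optimizer.minimize(self.loss_train)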
