mirror of
https://github.com/GokuMohandas/Made-With-ML.git
synced 2026-03-09 07:12:37 -05:00
Foundations -> Utilities Errors and questions #30
Originally created by @gitgithan on GitHub (Oct 17, 2021).
- In `predict_step`, `z = F.softmax(z).cpu().numpy()` is shown on the webpage. The notebook correctly assigns to `y_prob = F.softmax(z).cpu().numpy()` though.
- `plt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], s=25, edgecolors="k"')` has a stray trailing quote (happens 1x here, 2x on the Data Quality page).
- In `train_step`, the raw logits were passed directly without `apply_softmax=True`.
- `train_step`'s loss needs `J.detach().item()`, but `eval_step` used `J` directly without `detach` and `item`.
- In `collate_fn`, `batch = np.array(batch, dtype=object)` was used, but I didn't understand why we convert to object. Adding a note on what happens without it (`VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated.`) would be very helpful in preparing students for ragged tensors and padding in CNN/RNN later.
- `X = torch.FloatTensor(X.astype(np.float32))` breaks with `ValueError: setting an array element with a sequence.` because `batch[:, 0]` indexing creates nested numpy array objects that can't be cast. This nesting will not occur for `y` during `batch[:, 1]`, because `y` began as a 1-d object already, so there is no nested array, no problem casting, and thus no need to stack `y`? (Same for the CNN lesson stacking `y`.) This question came about when going through the CNN lesson and wondering why there was no `X` stacking there. Then I realized int casting worked there because
`padded_sequences = np.zeros` began without nesting, and numpy was also able to implicitly flatten the `sequence` numpy array during `padded_sequences[i][:len(sequence)] = sequence`.

@GokuMohandas commented on GitHub (Oct 18, 2021):
I'm removing the `apply_softmax` flag altogether and just using `z` and `y_prob` to differentiate between logits and probabilities (softmax applied to logits). I'll move softmax outside of the forward pass. I started doing this in the MLOps lessons but haven't gone back to edit these yet, or have only replaced a few of them. I'll at least update the webpage now, since I'll be moving those directly into new notebooks this winter.
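The ragged-batch question above can be sketched in a few lines. This is a minimal standalone example, not the lesson's actual `collate_fn`; the toy `batch` data is made up. It shows why `dtype=object` is needed for column indexing, why `X.astype(np.float32)` then fails while `y` casts fine, and how a pre-allocated zeros array sidesteps the nesting:

```python
import numpy as np

# Toy ragged batch of (sequence, label) pairs: sequences differ in length.
batch = [(np.array([1, 2, 3]), 0), (np.array([4, 5]), 1)]

# Without dtype=object, NumPy would emit VisibleDeprecationWarning (and
# newer versions raise) because the nested rows have inhomogeneous shapes.
# With dtype=object every cell is stored as a Python object, so column
# indexing works:
arr = np.array(batch, dtype=object)   # shape (2, 2) object array
X = arr[:, 0]  # object array whose elements are ndarrays (nested)
y = arr[:, 1]  # object array whose elements are plain ints (not nested)

# y casts cleanly because its elements are scalars...
y_int = y.astype(np.int64)
# ...while X.astype(np.float32) would raise
# "ValueError: setting an array element with a sequence."
# since each element of X is itself an array.

# Padding fixes X: start from a non-nested zeros array and copy each
# sequence into a slice of a row (NumPy flattens it into the slice).
max_len = max(len(seq) for seq in X)
padded = np.zeros((len(X), max_len), dtype=np.float32)
for i, seq in enumerate(X):
    padded[i, :len(seq)] = seq
```

Here `padded` ends up as a plain `(2, 3)` float array with the short sequence zero-padded, which is exactly the shape `torch.FloatTensor` can consume.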
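The detach/item point and the logits-vs-probabilities split discussed above can be sketched as follows. The model and data here are hypothetical placeholders, not the lesson's code: the forward pass returns raw logits `z`, the loss is tracked via `J.detach().item()`, and softmax is applied outside the model only when probabilities are needed:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(2, 3)   # toy classifier standing in for the real model
X = torch.randn(4, 2)
y = torch.tensor([0, 1, 2, 1])

z = model(X)                    # forward pass returns raw logits
J = F.cross_entropy(z, y)       # cross-entropy expects logits, not probs
loss = J.detach().item()        # detach from the graph, extract a float
# (accumulating J directly would keep the whole autograd graph alive,
#  which is why eval_step needs detach/item just like train_step)

y_prob = F.softmax(z, dim=1)    # probabilities, computed only when needed
y_pred = y_prob.argmax(dim=1)
```

Keeping softmax out of `forward` means `z` is always logits and `y_prob` always probabilities, so no `apply_softmax` flag has to be threaded through the steps.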