Is there a simpler way to set up the DataLoader, given that in an autoencoder the input and target data are identical and the data is loaded during training? The DataLoader always requires two inputs. How can I simplify the DataLoader for an autoencoder in PyTorch?
Currently I define my DataLoader like this:
import numpy.random as rnd
import torch
import torch.utils.data as data_utils
from torch.autograd import Variable

X_train = rnd.random((300, 100))
X_val = rnd.random((75, 100))
# Input and target are the same tensor, so it is passed twice:
train = data_utils.TensorDataset(torch.from_numpy(X_train).float(), torch.from_numpy(X_train).float())
val = data_utils.TensorDataset(torch.from_numpy(X_val).float(), torch.from_numpy(X_val).float())
train_loader = data_utils.DataLoader(train, batch_size=1)
val_loader = data_utils.DataLoader(val, batch_size=1)
and I train it like this:
for epoch in range(50):
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data), Variable(target).detach()
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
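One possible simplification (a sketch, not the only way; `AutoencoderDataset` is a hypothetical name, not a PyTorch class): write a small custom `Dataset` that stores the data once and returns a single tensor per sample, which then serves as both the input and the reconstruction target in the loss.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class AutoencoderDataset(Dataset):
    """Stores one tensor; each sample is both input and target."""
    def __init__(self, X):
        self.X = torch.from_numpy(X).float()

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx]

X_train = np.random.random((300, 100))
train_loader = DataLoader(AutoencoderDataset(X_train), batch_size=1)

for data in train_loader:
    # `data` is used both as the model input and as the target:
    # output = model(data); loss = criterion(output, data)
    pass
```

Alternatively, `TensorDataset` also accepts a single tensor, in which case each batch is a 1-tuple and can be unpacked with `data, = batch`.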