[PyTorch in Action] Iris Multi-class Classification


I. [Problem Overview]

The goal of this section is simply to implement multi-class classification with PyTorch; little attention is paid to squeezing out training performance.

We use the iris dataset for a 3-class classification task. After shuffling, 80 of the 150 samples are taken as the training set and the rest as the test set, fed to a simple three-layer neural network with ReLU activations.

During training, the input is 80 × 4:

1) The first layer has 20 neurons; w[1] has shape 4 × 20, so its output is 80 × 20;

2) The second layer has 30 neurons; w[2] has shape 20 × 30, so its output is 80 × 30;

3) The third layer has 3 neurons; w[3] has shape 30 × 3, so its output is 80 × 3;

Each sample thus gets a 1 × 3 vector of logits. The loss function is nn.CrossEntropyLoss(), which combines nn.LogSoftmax() and nn.NLLLoss(); it maps the 80 × 3 outputs and the 80 labels to a single scalar loss. Class predictions are obtained separately by taking the argmax of each row.
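The layer shapes above can be verified with a dummy batch (a minimal sketch; only the sizes come from the description, the weights are random):

```python
import torch
import torch.nn as nn

# Sketch: check the layer shapes described above with a dummy 80 x 4 batch.
net = nn.Sequential(
    nn.Linear(4, 20),   # w[1]: 4 x 20  -> output 80 x 20
    nn.ReLU(),
    nn.Linear(20, 30),  # w[2]: 20 x 30 -> output 80 x 30
    nn.ReLU(),
    nn.Linear(30, 3),   # w[3]: 30 x 3  -> output 80 x 3
)
x = torch.rand(80, 4)
out = net(x)
print(out.shape)  # torch.Size([80, 3])
```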

II. [Key Points]

1. When initializing the mynet network, define a member that holds the layers. The simple way:

self.fc = nn.Sequential(
    nn.Linear(4, 20),
    nn.ReLU(),
    nn.Linear(20, 30),
    nn.ReLU(),
    nn.Linear(30, 3)
)

2. Define the loss function. This example uses nn.CrossEntropyLoss(), which, for single-target classification, combines nn.LogSoftmax() and nn.NLLLoss() to compute the loss.

import torch
import torch.nn as nn

input = torch.rand(4, 3)
so1 = nn.Softmax(dim=1)                  # softmax first
input2 = torch.log(so1(input))           # then take the log
target = torch.LongTensor([0, 1, 0, 2])  # set the labels
# nn.NLLLoss takes, for each sample, the log-softmax value at the label's
# index, negates it, and averages over the batch.
print(-(input2[0][0] + input2[1][1] + input2[2][0] + input2[3][2]) / 4)  # manual NLLLoss (no weights)
# verify with nn.NLLLoss
loss = nn.NLLLoss()
output = loss(input2, target)
print('output:', output)
Result:
tensor(0.9664)
output: tensor(0.9664)
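Equivalently, nn.CrossEntropyLoss() can be fed the raw, unnormalized outputs directly, and it produces the same value as the manual LogSoftmax + NLLLoss route above (a quick sketch with fresh random inputs):

```python
import torch
import torch.nn as nn

# Sketch: CrossEntropyLoss on raw logits == LogSoftmax followed by NLLLoss.
logits = torch.rand(4, 3)
target = torch.LongTensor([0, 1, 0, 2])

ce = nn.CrossEntropyLoss()(logits, target)
manual = nn.NLLLoss()(torch.log(nn.Softmax(dim=1)(logits)), target)
print(torch.allclose(ce, manual, atol=1e-6))  # True
```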

If you use MSE or some other loss instead, make sure the outputs and labels match the shape and dtype that the loss function expects.
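For instance, nn.MSELoss() expects a float target of the same shape as the output, so integer class labels would first need one-hot encoding. A sketch of that shape adjustment (the 4 × 3 sizes here are just for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: MSELoss needs float targets matching the output shape, so integer
# class labels are one-hot encoded first.
out = torch.rand(4, 3)                    # network output: 4 samples x 3 classes
labels = torch.LongTensor([0, 1, 0, 2])   # class indices
one_hot = F.one_hot(labels, num_classes=3).float()  # shape 4 x 3, float dtype
loss = nn.MSELoss()(out, one_hot)
print(one_hot.shape, loss.item())
```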

3. Using the DataLoader iterator

DataLoader gives convenient batched access to the objects in a Dataset. The num_workers parameter deserves attention: when training on CPU you can leave it alone, but on GPU you may need to set it to 0, or keep the loader producing CPU tensors and move each batch to the GPU after it is fetched. Code for the latter approach:

train_dataset = Data.TensorDataset(torch.from_numpy(train).float(),
                                   torch.from_numpy(train_label).long())  # the dataset
# Take 10 shuffled samples per iteration: with 80 samples and batches of 10,
# 8 iterations cover all the data.
train_loader = Data.DataLoader(dataset=train_dataset,
                               batch_size=10, shuffle=True)
# for + enumerate iterate over the loader, yielding (index, batch) pairs
for i, (x, y) in enumerate(train_loader):
    print('i:', i)
    if torch.cuda.is_available():  # move the batch to the GPU
        x = x.cuda()
        y = y.cuda()

4. Define the forward pass

def forward(self, inputs):
    outputs = self.fc(inputs)
    return outputs

5. Define the gradient-descent optimizer:

# choose the optimizer
self.optim = torch.optim.Adam(params=self.parameters(), lr=0.1)

# In each training step, compute the loss from the forward pass, then use it
# for gradient computation and a parameter update:
def train(self, x, label):
    ...
    loss = self.mse(out, label)  # loss from the forward pass
    self.optim.zero_grad()       # clear old gradients
    loss.backward()              # backward pass: compute gradients
    self.optim.step()            # apply gradients: update parameters

III. [Source Code]

The iris dataset bundled with sklearn serves as the raw data. The code:

import torch.nn as nn
import torch
import torch.utils.data as Data
def getdata():
    from sklearn.datasets import load_iris
    import numpy as np
    train_data = load_iris()
    data = train_data['data']
    labels = train_data['target'].reshape(-1, 1)
    total_data = np.hstack((data, labels))  # stick the labels onto the features
    np.random.shuffle(total_data)           # shuffle before splitting
    train = total_data[0:80, :-1]
    test = total_data[80:, :-1]
    train_label = total_data[0:80, -1].reshape(-1, 1)
    test_label = total_data[80:, -1].reshape(-1, 1)
    return data, labels, train, test, train_label, test_label
    
# network class
class mynet(nn.Module):
    def __init__(self):
        super(mynet, self).__init__()
        self.fc = nn.Sequential(  # layers and activation functions
            nn.Linear(4, 20),
            nn.ReLU(),
            nn.Linear(20, 30),
            nn.ReLU(),
            nn.Linear(30, 3)
        )
        self.mse = nn.CrossEntropyLoss()  # cross-entropy loss (despite the attribute name)
        self.optim = torch.optim.Adam(params=self.parameters(), lr=0.1)

    def forward(self, inputs):
        outputs = self.fc(inputs)
        return outputs

    def train(self, x, label):  # note: this shadows nn.Module.train()
        out = self.forward(x)        # forward pass
        loss = self.mse(out, label)  # loss from the forward pass
        self.optim.zero_grad()       # clear old gradients
        loss.backward()              # backward pass: compute gradients
        self.optim.step()            # apply gradients: update parameters

    def test(self, test_):
        return self.fc(test_)
        
if __name__ == '__main__':
    data, labels, train, test, train_label, test_label = getdata()
    net = mynet()
    train_dataset = Data.TensorDataset(torch.from_numpy(train).float(),
                                       torch.from_numpy(train_label).long())
    BATCH_SIZE = 10
    train_loader = Data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True)
    for epoch in range(100):
        for step, (x, y) in enumerate(train_loader):
            y = y.reshape(-1)  # flatten (batch, 1) labels to (batch,)
            net.train(x, y)
            if epoch % 20 == 0:
                print('Epoch: ', epoch, '| Step: ', step, '| batch y: ', y.numpy())
    out = net.test(torch.from_numpy(data).float())
    prediction = torch.max(out, 1)[1]  # [1] gives the indices, [0] the max values
    pred_y = prediction.data.numpy()
    test_y = labels.reshape(1, -1)
    target_y = torch.from_numpy(test_y).long().data.numpy()
    accuracy = float((pred_y == target_y).astype(int).sum()) / float(target_y.size)
    print("Iris prediction accuracy", accuracy)

Result:

Epoch:  0 | Step:  0 | batch y:  [0 2 2 0 1 2 0 0 0 2]
Epoch:  0 | Step:  1 | batch y:  [1 2 0 1 1 1 0 0 1 2]
Epoch:  0 | Step:  2 | batch y:  [1 2 1 1 0 1 2 2 1 1]
Epoch:  0 | Step:  3 | batch y:  [0 2 2 1 1 0 0 1 2 1]
Epoch:  0 | Step:  4 | batch y:  [2 0 0 1 0 2 2 0 0 2]
Epoch:  0 | Step:  5 | batch y:  [2 0 2 0 2 2 1 1 1 1]
Epoch:  0 | Step:  6 | batch y:  [1 0 2 0 2 2 2 2 0 1]
Epoch:  0 | Step:  7 | batch y:  [2 0 0 0 1 2 0 2 0 1]
Epoch:  20 | Step:  0 | batch y:  [0 0 1 2 2 2 1 0 2 2]
Epoch:  20 | Step:  1 | batch y:  [0 1 2 2 2 0 0 0 2 1]
Epoch:  20 | Step:  2 | batch y:  [2 1 1 0 1 1 2 1 2 1]
Epoch:  20 | Step:  3 | batch y:  [0 2 0 1 0 1 2 0 1 2]
Epoch:  20 | Step:  4 | batch y:  [0 2 2 1 2 1 2 2 0 1]
Epoch:  20 | Step:  5 | batch y:  [0 0 1 0 2 0 1 2 2 1]
Epoch:  20 | Step:  6 | batch y:  [0 2 0 0 0 1 1 0 2 2]
Epoch:  20 | Step:  7 | batch y:  [0 0 2 1 1 1 1 2 0 0]
Epoch:  40 | Step:  0 | batch y:  [2 1 0 0 1 2 2 1 1 0]
Epoch:  40 | Step:  1 | batch y:  [1 1 2 1 2 2 0 0 2 2]
Epoch:  40 | Step:  2 | batch y:  [1 1 1 0 0 1 0 1 1 2]
Epoch:  40 | Step:  3 | batch y:  [0 2 0 2 2 2 0 0 1 1]
Epoch:  40 | Step:  4 | batch y:  [2 0 2 2 1 2 1 2 1 0]
Epoch:  40 | Step:  5 | batch y:  [0 1 0 1 0 0 2 0 2 1]
Epoch:  40 | Step:  6 | batch y:  [0 1 2 0 1 2 2 0 2 0]
Epoch:  40 | Step:  7 | batch y:  [0 0 2 2 0 1 2 2 0 1]
Epoch:  60 | Step:  0 | batch y:  [2 2 2 1 1 1 0 0 2 0]
Epoch:  60 | Step:  1 | batch y:  [0 2 2 2 0 0 1 1 1 2]
Epoch:  60 | Step:  2 | batch y:  [2 1 0 0 1 0 1 1 1 1]
Epoch:  60 | Step:  3 | batch y:  [2 0 2 1 1 0 1 0 1 2]
Epoch:  60 | Step:  4 | batch y:  [2 2 1 1 0 0 2 2 2 2]
Epoch:  60 | Step:  5 | batch y:  [0 1 2 2 0 2 2 1 0 2]
Epoch:  60 | Step:  6 | batch y:  [1 2 0 1 1 0 0 0 0 2]
Epoch:  60 | Step:  7 | batch y:  [0 2 0 2 0 0 0 2 1 1]
Epoch:  80 | Step:  0 | batch y:  [0 2 2 0 0 1 0 0 1 0]
Epoch:  80 | Step:  1 | batch y:  [0 2 1 1 2 1 2 0 1 1]
Epoch:  80 | Step:  2 | batch y:  [0 0 2 0 1 1 0 2 2 1]
Epoch:  80 | Step:  3 | batch y:  [2 2 2 0 2 1 0 1 2 0]
Epoch:  80 | Step:  4 | batch y:  [0 2 2 1 2 0 0 0 1 0]
Epoch:  80 | Step:  5 | batch y:  [1 2 1 0 2 2 1 0 2 2]
Epoch:  80 | Step:  6 | batch y:  [0 2 1 1 1 2 1 2 2 2]
Epoch:  80 | Step:  7 | batch y:  [1 2 0 0 0 2 0 1 1 1]
Iris prediction accuracy 0.9266666666666666
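One caveat: the accuracy above is computed over all 150 samples, including the 80 used for training, so it is an optimistic estimate. Below is a self-contained sketch that retrains the same architecture and scores only the 70 held-out samples (the seed and lr=0.01 are my own choices, not from the code above; the exact number will vary):

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_iris

torch.manual_seed(0)
np.random.seed(0)

# Rebuild the 80/70 split as in getdata(), then train a fresh net.
iris = load_iris()
total = np.hstack((iris['data'], iris['target'].reshape(-1, 1)))
np.random.shuffle(total)
train_x, train_y = total[:80, :-1], total[:80, -1]
test_x, test_y = total[80:, :-1], total[80:, -1]

net = nn.Sequential(nn.Linear(4, 20), nn.ReLU(),
                    nn.Linear(20, 30), nn.ReLU(),
                    nn.Linear(30, 3))
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(net.parameters(), lr=0.01)

x = torch.from_numpy(train_x).float()
y = torch.from_numpy(train_y).long()
for _ in range(200):                  # full-batch training
    optim.zero_grad()
    loss_fn(net(x), y).backward()
    optim.step()

# Score only the 70 samples the net never saw during training.
with torch.no_grad():
    pred = net(torch.from_numpy(test_x).float()).argmax(dim=1).numpy()
acc = (pred == test_y).mean()
print("held-out accuracy:", acc)
```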