Text Classification with TorchText
This tutorial shows how to use the text classification datasets in torchtext, including:
- AG_NEWS,
- SogouNews,
- DBpedia,
- YelpReviewPolarity,
- YelpReviewFull,
- YahooAnswers,
- AmazonReviewPolarity,
- AmazonReviewFull
This example shows how to train a supervised learning algorithm for classification using one of these TextClassification datasets.
Load data with ngrams
Ngram features are used to capture some partial information about the local word order. In practice, bi-grams or tri-grams are applied as word groups to provide more benefit than one word alone. An example:
"load data with ngrams"
Bi-grams results: "load data", "data with", "with ngrams"
Tri-grams results: "load data with", "data with ngrams"
TextClassification datasets support the ngrams method. By setting ngrams to 2, the example text in a dataset will be a list of single words plus bi-gram strings.
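To see what this transform produces, torchtext's ngrams_iterator can be run directly on a token list. A minimal sketch, assuming the legacy torchtext API used throughout this tutorial:

from torchtext.data.utils import ngrams_iterator

tokens = "load data with ngrams".split()
# Yields the original tokens first, then the space-joined bi-grams
print(list(ngrams_iterator(tokens, 2)))
# ['load', 'data', 'with', 'ngrams', 'load data', 'data with', 'with ngrams']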
import torch
import torchtext
from torchtext.datasets import text_classification
NGRAMS = 2
import os
if not os.path.isdir('./.data'):
    os.mkdir('./.data')
train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](
    root='./.data', ngrams=NGRAMS, vocab=None)
BATCH_SIZE = 16
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Define the model
The model is composed of an EmbeddingBag layer and a linear layer (see the figure below). nn.EmbeddingBag computes the mean of a "bag" of embeddings. Although the text entries here have different lengths, no padding is needed, because the text lengths are saved in offsets. Additionally, since nn.EmbeddingBag accumulates the average across the embeddings on the fly, it can enhance performance and memory efficiency when processing a sequence of tensors.
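The offsets mechanism can be seen in isolation with a tiny, self-contained example (a sketch for illustration only; the sizes here are made up):

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(10, 3, mode='mean')
# Two entries of different lengths, flattened into one tensor:
text = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
# offsets mark where each entry begins: entry 0 at index 0, entry 1 at index 4
offsets = torch.tensor([0, 4])
print(bag(text, offsets).shape)  # torch.Size([2, 3]): one mean vector per entry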
import torch.nn as nn
import torch.nn.functional as F
class TextSentiment(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
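A quick shape check of the model on made-up inputs (hypothetical sizes, for illustration only):

tiny_model = TextSentiment(vocab_size=20, embed_dim=4, num_class=4)
sample_text = torch.tensor([1, 2, 3, 4, 5])   # two entries, flattened together
sample_offsets = torch.tensor([0, 3])         # entry boundaries within sample_text
print(tiny_model(sample_text, sample_offsets).shape)  # torch.Size([2, 4])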
Initiate an instance
The AG_NEWS dataset has four labels, and therefore the number of classes is four.
1 : World
2 : Sports
3 : Business
4 : Sci/Tec
The vocab size is equal to the length of the vocab (including single words and ngrams). The number of classes is equal to the number of labels, which is four in the AG_NEWS case.
VOCAB_SIZE = len(train_dataset.get_vocab())
EMBED_DIM = 32
NUM_CLASS = len(train_dataset.get_labels())
model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUM_CLASS).to(device)
Functions used to generate batch
Since the text entries have different lengths, a custom function generate_batch() is used to generate data batches and offsets. The function is passed to collate_fn in torch.utils.data.DataLoader. The input to collate_fn is a list of tensors with the size of batch_size, and collate_fn packs them into a mini-batch. Pay attention here and make sure that collate_fn is declared as a top-level def; this ensures that the function is available in each worker.
The text entries in the original data batch input are packed into a list and concatenated as a single tensor, which serves as the input of nn.EmbeddingBag. The offsets is a tensor of delimiters representing the beginning index of each individual sequence in the text tensor. Label is a tensor saving the labels of the individual text entries.
def generate_batch(batch):
    label = torch.tensor([entry[0] for entry in batch])
    text = [entry[1] for entry in batch]
    offsets = [0] + [len(entry) for entry in text]
    # torch.Tensor.cumsum returns the cumulative sum
    # of elements in the dimension dim.
    # torch.Tensor([1.0, 2.0, 3.0]).cumsum(dim=0)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text = torch.cat(text)
    return text, offsets, label
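As a sanity check, generate_batch can be run on a hand-made toy batch of (label, token-id tensor) pairs (the ids below are made up for illustration):

toy_batch = [(0, torch.tensor([3, 7, 1])), (2, torch.tensor([5, 9]))]
text, offsets, label = generate_batch(toy_batch)
print(text)     # tensor([3, 7, 1, 5, 9])
print(offsets)  # tensor([0, 3])
print(label)    # tensor([0, 2])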
Define functions to train the model and evaluate results
torch.utils.data.DataLoader is recommended for PyTorch users since it makes loading data in parallel easy. We use DataLoader here to load the AG_NEWS dataset and send it to the model for training/validation.
from torch.utils.data import DataLoader
def train_func(sub_train_):

    # Train the model
    train_loss = 0
    train_acc = 0
    data = DataLoader(sub_train_, batch_size=BATCH_SIZE, shuffle=True,
                      collate_fn=generate_batch)
    for i, (text, offsets, cls) in enumerate(data):
        optimizer.zero_grad()
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        output = model(text, offsets)
        loss = criterion(output, cls)
        train_loss += loss.item()
        loss.backward()
        optimizer.step()
        train_acc += (output.argmax(1) == cls).sum().item()

    # Adjust the learning rate
    scheduler.step()

    return train_loss / len(sub_train_), train_acc / len(sub_train_)
def test(data_):
    loss = 0
    acc = 0
    data = DataLoader(data_, batch_size=BATCH_SIZE, collate_fn=generate_batch)
    for text, offsets, cls in data:
        text, offsets, cls = text.to(device), offsets.to(device), cls.to(device)
        with torch.no_grad():
            output = model(text, offsets)
            # Use a separate name so the running total is not overwritten
            batch_loss = criterion(output, cls)
            loss += batch_loss.item()
            acc += (output.argmax(1) == cls).sum().item()

    return loss / len(data_), acc / len(data_)
Split the dataset and run the model
Since the original AG_NEWS has no validation dataset, we split the training dataset into train/valid sets with a split ratio of 0.95 (train) and 0.05 (valid). Here we use the torch.utils.data.dataset.random_split function from the PyTorch core library.
The CrossEntropyLoss criterion combines nn.LogSoftmax() and nn.NLLLoss() in a single class. It is useful when training a classification problem with C classes. SGD implements stochastic gradient descent as the optimizer. The initial learning rate is set to 4.0, and StepLR is used here to adjust the learning rate through epochs.
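As a small, self-contained illustration of that schedule (a sketch separate from the tutorial's training loop, assuming a PyTorch version that provides get_last_lr()):

_opt = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=4.0)
_sched = torch.optim.lr_scheduler.StepLR(_opt, 1, gamma=0.9)
for _ in range(3):
    print(_sched.get_last_lr())  # [4.0], then [3.6], then [3.24...]
    _opt.step()
    _sched.step()  # multiply the lr by gamma=0.9 once per epoch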
import time
from torch.utils.data.dataset import random_split
N_EPOCHS = 5
min_valid_loss = float('inf')

criterion = torch.nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=4.0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)

train_len = int(len(train_dataset) * 0.95)
sub_train_, sub_valid_ = \
    random_split(train_dataset, [train_len, len(train_dataset) - train_len])

for epoch in range(N_EPOCHS):

    start_time = time.time()
    train_loss, train_acc = train_func(sub_train_)
    valid_loss, valid_acc = test(sub_valid_)

    secs = int(time.time() - start_time)
    mins = secs / 60
    secs = secs % 60

    print('Epoch: %d' % (epoch + 1), " | time in %d minutes, %d seconds" % (mins, secs))
    print(f'\tLoss: {train_loss:.4f}(train)\t|\tAcc: {train_acc * 100:.1f}%(train)')
    print(f'\tLoss: {valid_loss:.4f}(valid)\t|\tAcc: {valid_acc * 100:.1f}%(valid)')
Running the model on a GPU gives the following information:
Epoch: 1 | time in 0 minutes, 11 seconds
Loss: 0.0263(train) | Acc: 84.5%(train)
Loss: 0.0001(valid) | Acc: 89.0%(valid)
Epoch: 2 | time in 0 minutes, 10 seconds
Loss: 0.0119(train) | Acc: 93.6%(train)
Loss: 0.0000(valid) | Acc: 89.6%(valid)
Epoch: 3 | time in 0 minutes, 9 seconds
Loss: 0.0069(train) | Acc: 96.4%(train)
Loss: 0.0000(valid) | Acc: 90.5%(valid)
Epoch: 4 | time in 0 minutes, 11 seconds
Loss: 0.0038(train) | Acc: 98.2%(train)
Loss: 0.0000(valid) | Acc: 90.4%(valid)
Epoch: 5 | time in 0 minutes, 11 seconds
Loss: 0.0022(train) | Acc: 99.0%(train)
Loss: 0.0000(valid) | Acc: 91.0%(valid)
Evaluate the model with test dataset
print('Checking the results of test dataset...')
test_loss, test_acc = test(test_dataset)
print(f'\tLoss: {test_loss:.4f}(test)\t|\tAcc: {test_acc * 100:.1f}%(test)')
Checking the results of test dataset…
Loss: 0.0237(test) | Acc: 90.5%(test)
Test on a random news
Use the best model so far and test it on a piece of golf news. The label information is listed in the code below.
import re
from torchtext.data.utils import ngrams_iterator
from torchtext.data.utils import get_tokenizer

ag_news_label = {1 : "World",
                 2 : "Sports",
                 3 : "Business",
                 4 : "Sci/Tec"}

def predict(text, model, vocab, ngrams):
    tokenizer = get_tokenizer("basic_english")
    with torch.no_grad():
        text = torch.tensor([vocab[token]
                             for token in ngrams_iterator(tokenizer(text), ngrams)])
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1
ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."
vocab = train_dataset.get_vocab()
model = model.to("cpu")
print("This is a %s news" %ag_news_label[predict(ex_text_str, model, vocab, 2)])
This is a Sports news
You can find the code examples displayed in this note here.
Total running time of the script: (2 minutes 7.283 seconds)