The `TEXT.vocab` object exposes three attributes that cover the properties we usually need:

- `freqs`: returns each word together with its frequency count.
- `itos`: returns the words in index order (index-to-string).
- `stoi`: returns the index of each word (string-to-index).
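Before looking at the torchtext calls below, it may help to see what these three attributes look like conceptually. The following is a hand-rolled mimic (not torchtext's actual implementation): `freqs` is a `Counter`, `itos` is a list with the special tokens first, and `stoi` is the inverse mapping as a `defaultdict` that sends unseen words to index 0.

```python
from collections import Counter, defaultdict

# Hand-rolled mimic of the three vocab attributes (assumed structure,
# not torchtext's real code).
tokens = ["This", "movie", "is", "good", "This", "movie"]
freqs = Counter(tokens)                        # word -> count
itos = ["<unk>", "<pad>"] + [w for w, _ in freqs.most_common()]
stoi = defaultdict(int, {w: i for i, w in enumerate(itos)})  # OOV -> 0

print(freqs["movie"])      # 2
print(itos[1])             # <pad>
print(stoi["never-seen"])  # 0
```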
```python
TEXT.build_vocab(train)
vocab = TEXT.vocab
vocab.freqs
>> Counter({u'I': 1,
 u'This': 2,
 u'a': 1,
 u'awesome': 1,
 u'good': 1,
 u'is': 1,
 u'it': 1,
 u'movie': 2,
 u'really': 1,
 u'regreted': 1,
 u'to': 1,
 u'was': 1,
 u'watch': 1})
```
```python
print(TEXT.vocab.freqs.keys())
>> [u'house',
 u'awesome',
 u'at',
 u'have',
 u'in',
 u'We',
 u'movie',
 u'cooking',
 u'to',
 u'too',
 u'was',
 u'is',
 u'good',
 u'chatting',
 u'This',
 u'big',
 u'The',
 u'My',
 u'like',
 u'dog',
 u'mother',
 u'my']
```
```python
vocab.itos
>> ['<unk>',
 '<pad>',
 u'This',
 u'movie',
 u'I',
 u'a',
 u'awesome',
 u'good',
 u'is',
 u'it',
 u'really',
 u'regreted',
 u'to',
 u'was',
 u'watch']
```
```python
vocab.stoi
>> defaultdict(<function _default_unk_index>,
 {'<pad>': 1,
 '<unk>': 0,
 u'I': 4,
 u'This': 2,
 u'a': 5,
 u'awesome': 6,
 u'good': 7,
 u'is': 8,
 u'it': 9,
 u'movie': 3,
 u'really': 10,
 u'regreted': 11,
 u'to': 12,
 u'was': 13,
 u'watch': 14})
```
When building the vocabulary you can also specify a minimum frequency and a maximum size:

```python
TEXT.build_vocab(pos, min_freq=10, max_size=10000)
```
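A rough sketch of how these two knobs prune the vocabulary (a hand-rolled mimic, not torchtext's actual filtering code): words below `min_freq` are dropped, then the list is capped at the `max_size` most frequent words.

```python
from collections import Counter

# Mimic of min_freq / max_size pruning (assumption, not torchtext's code).
freqs = Counter({"movie": 5, "This": 4, "good": 2, "regreted": 1})
min_freq, max_size = 2, 3

kept = [w for w, c in freqs.most_common() if c >= min_freq]  # drop rare words
kept = kept[:max_size]                                       # cap the size
print(kept)  # ['movie', 'This', 'good']
```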
First, let's look at a few interesting parameters that can be passed to the `Vocab` class.

- `specials`: adds special tokens such as a start-of-sentence symbol `<sos>` or an end-of-sentence symbol `<eos>`. The default is `['<pad>']`, and the unknown-word token `<unk>` is always added automatically, whether or not you list it yourself.
specials – The list of special tokens (e.g., padding or eos) that will be prepended to the vocabulary in addition to an <unk> token. Default: [‘<pad>’]
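Concretely, the effect on `itos` can be sketched like this (the exact ordering is an assumption based on the docstring above, which says the specials are prepended in addition to `<unk>`):

```python
# Sketch: specials sit at the front of itos alongside <unk>,
# before any regular words (ordering assumed from the docstring).
specials = ["<sos>", "<eos>", "<pad>"]
words = ["movie", "This"]
itos = ["<unk>"] + specials + words
print(itos)  # ['<unk>', '<sos>', '<eos>', '<pad>', 'movie', 'This']
```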
- `vectors`: loads pretrained word vectors into the vocabulary.

```python
import os
from torchtext.vocab import Vectors

vectors = Vectors(name='myvector/glove/glove.6B.200d.txt')
TEXT.build_vocab(train, vectors=vectors)

# Going further, you can specify a cache directory along with name,
# instead of using the default .vector_cache directory.
cache = '.vector_cache'
if not os.path.exists(cache):
    os.mkdir(cache)
vectors = Vectors(name='myvector/glove/glove.6B.200d.txt', cache=cache)
TEXT.build_vocab(train, vectors=vectors)

# Built-in pretrained vectors can also be loaded directly:
from torchtext.vocab import GloVe
text.build_vocab(train, vectors=GloVe(name='6B', dim=300))
label.build_vocab(train)
```
vectors – One of either the available pretrained vectors or custom pretrained vectors (see Vocab.load_vectors); or a list of aforementioned vectors
- `unk_init`: specifies how the vectors of out-of-vocabulary words are initialized; by default they are initialized to all zeros.
unk_init (callback) – by default, initialize out-of-vocabulary word vectors to zero vectors; can be any function that takes in a Tensor and returns a Tensor of the same size. Default: torch.Tensor.zero_
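The callback contract is easy to mimic in plain Python: it receives a vector, fills it in place, and returns a vector of the same size. Here a list stands in for the `torch.Tensor` that torchtext actually passes, and `my_unk_init` is a made-up name for illustration:

```python
import random

# Pure-Python analogue of an unk_init callback: fill an OOV "vector"
# with small Gaussian noise instead of zeros (a plain list stands in
# for the torch.Tensor torchtext would pass).
def my_unk_init(vec):
    for i in range(len(vec)):
        vec[i] = random.gauss(0.0, 0.05)
    return vec

oov = [0.0] * 5
my_unk_init(oov)
print(len(oov))  # 5, same size as the input
```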
```python
class torchtext.vocab.Vocab(counter, max_size=None, min_freq=1, specials=['<pad>'],
                            vectors=None, unk_init=None, vectors_cache=None,
                            specials_first=True)
```
A helper like the following re-initializes the vectors of unknown words (which were left as all zeros) and logs the average norm of the known vectors:

```python
import logging
import torch

logger = logging.getLogger(__name__)

def init_emb(vocab, init="randn", num_special_toks=2):
    emb_vectors = vocab.vectors
    sweep_range = len(vocab)
    running_norm = 0.
    num_non_zero = 0
    total_words = 0
    for i in range(num_special_toks, sweep_range):
        if len(emb_vectors[i, :].nonzero()) == 0:
            # std = 0.05 is based on the norm of average GloVE 100-dim word vectors
            if init == "randn":
                torch.nn.init.normal_(emb_vectors[i], mean=0, std=0.05)
        else:
            num_non_zero += 1
            running_norm += torch.norm(emb_vectors[i])
        total_words += 1
    logger.info("average GloVE norm is {}, number of known words are {}, "
                "total number of words are {}".format(
                    running_norm / num_non_zero, num_non_zero, total_words))
```
It turns out you can even pull out the Field's internal methods and use them standalone:

```python
test_sent = TEXT.preprocess(test_sent)
# [u'I', u'like', u'watching', u'movie']
test_idx = [[TEXT.vocab.stoi[x] for x in test_sent]]
# [[7, 6, 0, 24]]
```
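The fallback behavior for unseen tokens can be reproduced with a plain `defaultdict`: any token missing from `stoi` maps to index 0, i.e. `<unk>` (the index values below are made up for illustration):

```python
from collections import defaultdict

# Minimal numericalization sketch; the index values are made up.
stoi = defaultdict(int, {"<unk>": 0, "<pad>": 1, "movie": 3, "I": 4, "like": 5})
test_sent = ["I", "like", "watching", "movie"]
test_idx = [[stoi[tok] for tok in test_sent]]  # "watching" is OOV -> 0
print(test_idx)  # [[4, 5, 0, 3]]
```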
When mapping an index back through `itos`, you need to add 1, because index 0 denotes the unknown word by default.
```python
out = model(x)
_, predicted = torch.max(out, 1)
LABEL.vocab.itos[predicted.data[0]]
# ['<unk>', u'1', u'-1']
# Note that <unk> even shows up in the label vocabulary, so offset by 1:
LABEL.vocab.itos[predicted.data[0] + 1]
```
For certain datasets you can specify dedicated start and end tokens; that way they are not folded into the unknown words, and the model knows where a sentence begins and ends:

```python
TEXT = data.Field(init_token='<bos>', eos_token='<eos>')
LABEL = data.Field(init_token='<bos>', eos_token='<eos>')
```
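What these two tokens do during numericalization can be sketched as wrapping each token list (a simplified mimic, not torchtext's code; `wrap` is a made-up helper):

```python
# Simplified mimic of init_token / eos_token wrapping.
def wrap(tokens, init_token="<bos>", eos_token="<eos>"):
    return [init_token] + tokens + [eos_token]

print(wrap(["This", "movie", "is", "good"]))
# ['<bos>', 'This', 'movie', 'is', 'good', '<eos>']
```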