diff --git a/Lecture/l3.ipynb b/Lecture/l3.ipynb index 8f855cb..03e279a 100644 --- a/Lecture/l3.ipynb +++ b/Lecture/l3.ipynb @@ -4,241 +4,366 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "在上一节中,我们简单介绍了nanoGPT的基本方式,但是我们也能看出这个GPT过于简陋,其生成效果急需进一步的提高\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 首先是编码,嵌入方式的改进\n", - "- 之前我们采用的是简单的一一对应的方式\n", - " - 对于只有26个大写和26个小写的英文字符来说,这样似乎还算合理,因为只是把50多个或者60多个字符按照顺序去编码为对应的数字而已" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "- 例如,OpenAI 在之前的GPT-2,GPT-3,GPT-4系列中使用的是其\n", - "发布的 tiktoken 库,而 Google 也有其自己的分词/编码工具 SentencePiece,他们只是\n", - "不同的方式,但做的都是将“完整的句子转化成整数向量编码”这样的一件事情。例\n", - "如,我们现在可以利用 tiktoken 库来十分方便地调用 GPT-2 中所训练的 tokenizer,从\n", - "而实现编解码过程" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[37046, 21043, 10890, 247, 162, 224, 253, 35894]\n", - "我是孙悟空\n" - ] - } - ], - "source": [ - "# Way2\n", - "import tiktoken\n", - "enc = tiktoken.get_encoding(\"cl100k_base\")\n", - "# enc = tiktoken.get_encoding(\"gpt2\")\n", - "print(enc.encode(\"我是孙悟空\"))\n", - "print(enc.decode(enc.encode(\"我是孙悟空\")))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "之前经过encoder之后的数量不会改变" + "- 在上一节中,我们简单地实现了一个十分十分基础甚至有些简陋的GPT,同时起生成效果看起来也有很大的提升空间\n", + "- 这一节中,我们将对通过一系列的推导来向大家引入可以增强性能的自注意力机制\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 自注意力机制怎么增强性能\n", + "\n", + "在此之前,nn.Embedding致力于将现有的编码转化为其对应的下一位的编码\n", + "\n", + "但是一个很重要的点是其忽略了现有的编码中彼此之间的联系,\n", + "\n", + "如果可以利用好这份联系,使得每个字的编码可以**相互通信**,是从能产生更好的性能呢" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "489540" + "torch.Size([4, 8, 2])" ] }, - "execution_count": 2, + "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "# 还是跟之前一样,我们先进行文本的读取\n", - "with open('../data/Xiyou.txt', 'r', encoding='utf-8') as f:\n", - " text = f.read()\n", - "\n", - "n = int(0.5*len(text)) # 前90%都是训练集,后10%都是测试集\n", - "text = text[:n]\n", - "\n", - " # 对文本进行编码\n", - "len(enc.encode(text)) # 获取编码之后的长度" + "# 此时我们以一个真实的情况为例,通过随机生成一些数据来代表当前的真实情况\n", + "torch.manual_seed(42) # 设置固定的种子,使得结果可以及时的复现\n", + "B,T,C = 4,8,2 # batch, time, channels\n", + "x = torch.randn(B,T,C)\n", + "x.shape" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- 最简单的通信方式,第五个编码可以很简单地与收到前面四个编码的平均影响,虽然这种通信的方式听起来也十分地弱" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([[1.9269, 1.4873]])\n", + "这是上面的均值累加:\n", + "tensor([1.9269, 1.4873])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 0.9007, -2.1055]])\n", + "这是上面的均值累加:\n", + "tensor([ 1.4138, -0.3091])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 0.9007, -2.1055],\n", + " [ 0.6784, -1.2345]])\n", + "这是上面的均值累加:\n", + "tensor([ 1.1687, -0.6176])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 0.9007, -2.1055],\n", + " [ 0.6784, -1.2345],\n", + " [-0.0431, -1.6047]])\n", + "这是上面的均值累加:\n", + "tensor([ 0.8657, -0.8644])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 
0.9007, -2.1055],\n", + " [ 0.6784, -1.2345],\n", + " [-0.0431, -1.6047],\n", + " [-0.7521, 1.6487]])\n", + "这是上面的均值累加:\n", + "tensor([ 0.5422, -0.3617])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 0.9007, -2.1055],\n", + " [ 0.6784, -1.2345],\n", + " [-0.0431, -1.6047],\n", + " [-0.7521, 1.6487],\n", + " [-0.3925, -1.4036]])\n", + "这是上面的均值累加:\n", + "tensor([ 0.3864, -0.5354])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 0.9007, -2.1055],\n", + " [ 0.6784, -1.2345],\n", + " [-0.0431, -1.6047],\n", + " [-0.7521, 1.6487],\n", + " [-0.3925, -1.4036],\n", + " [-0.7279, -0.5594]])\n", + "这是上面的均值累加:\n", + "tensor([ 0.2272, -0.5388])\n", + "----------------------\n", + "tensor([[ 1.9269, 1.4873],\n", + " [ 0.9007, -2.1055],\n", + " [ 0.6784, -1.2345],\n", + " [-0.0431, -1.6047],\n", + " [-0.7521, 1.6487],\n", + " [-0.3925, -1.4036],\n", + " [-0.7279, -0.5594],\n", + " [-0.7688, 0.7624]])\n", + "这是上面的均值累加:\n", + "tensor([ 0.1027, -0.3762])\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n", + "这是上面的均值累加:\n", + "----------------------\n" + ] + } + ], + "source": [ + "# 第一种方式:为了循环和聚合,我们使用torch.mean函数进行操作\n", + "\n", + "xbow = torch.zeros((B,T,C))\n", + "\n", + "for b in range(B): # 遍历所有的batch\n", + " for t in range(T):\n", + " # 使用切片操作x[b,:t+1]来获取x在第b个批次中前t+1个时间步的所有元素,得到一个形状为(t,C)的张量xprev。\n", + " xprev = x[b,:t+1] # (t,C)\n", + " # 使用torch.mean(xprev, 0)来计算xprev在第一个维度(dim=0)上的平均值。这个操作会返回一个形状为(C,)的张量,它的每个元素是xprev在对应列上的元素的平均值。\n", + " xbow[b,t] = torch.mean(xprev, 0) \n", + " \n", + " # print(b) if b==0 else None\n", + " # print(t) if b==0 else None\n", + " print(xprev) if b==0 else None # 前面的累加\n", + " print('这是上面的均值累加:')\n", + " print(xbow[b,t]) if b==0 else None\n", + " print(\"----------------------\")\n", + "\n", + "# 虽然这样是可以行的,但是这里的实现方式明显可以进一步改进" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "a=\n", + "tensor([[1.0000, 0.0000, 0.0000],\n", + " [0.5000, 0.5000, 0.0000],\n", + " [0.3333, 0.3333, 0.3333]])\n", + "--\n", + "b=\n", + "tensor([[0., 1.],\n", + " [3., 0.],\n", + " [1., 1.]])\n", + "--\n", + "c=\n", + "tensor([[0.0000, 1.0000],\n", + " [1.5000, 0.5000],\n", + " [1.3333, 0.6667]])\n" + ] + } + ], + "source": [ + "# 方法2:使用矩阵乘法\n", + "\n", + "a = torch.tril(torch.ones(3, 3)) # 创建一个下三角的矩阵的函数\n", + "# torch.sum 计算张量a在每一横行上的值,\n", + "a = a / torch.sum(a, 1, 
keepdim=True) # ! 这一步相当于是在对每一横行做归一化,使每行的权重之和为1\n", + "b = torch.randint(0,10,(3,2)).float()\n", + "c = a @ b\n", + "print('a=')\n", + "print(a)\n", + "print('--')\n", + "print('b=')\n", + "print(b)\n", + "print('--')\n", + "print('c=')\n", + "print(c)\n", + "\n", + "# 在这个过程中,c 的每一行都是 b 中前若干行按 a 给出的权重求得的加权平均" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "* 这个时候可以揭晓:我们一直在构造的其实是一个**权重矩阵**,前面的平均只是所有权重相等时的特例" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/plain": [ + "True" ] }, + "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ + "weight = torch.tril(torch.ones(T, T))\n", + "weight = weight / weight.sum(1, keepdim=True) # 在dim=1(每一横行)上求和并归一化,得到逐行平均的权重\n", + "weight\n", + "# 而在这个例子中的b,其实是x\n", + "xbow2 = weight @ x # (B,T,T) @ (B,T,C) ------> (B,T,C)\n", + "torch.allclose(xbow,xbow2) # 这个是用于检测两个张量是否在一定的容忍度内是相等的" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "结果为`True`,说明这短短几行矩阵乘法就完成了上面双重循环所做的事情" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ + "* 然而,这里还有一种更为巧妙的等价实现方式" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ + "True" ] }, + "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ + "\n", + "# 第三种方式:使用softmax\n", + "trils = torch.tril(torch.ones(T,T))\n", + "\n", + "weight = torch.zeros((T,T)) # 构造一个全为0的矩阵\n", + "weight = weight.masked_fill(trils == 0,float('-inf')) # 把所有trils为0的位置都填充为负无穷\n", + "# 然后,我们在每一行的维度上应用softmax,被填为-inf的位置权重变为0,其余位置平分权重\n", + "weight = F.softmax(weight,dim=-1)\n", + "\n", + "xbow3 = weight @ x\n", + "\n", + "torch.allclose(xbow,xbow3) # 这个是用于检测两个张量是否在一定的容忍度内是相等的\n" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ + "tensor([[1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n", + " [0.5000, 0.5000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n", + " [0.3333, 0.3333, 0.3333, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n", + " [0.2500, 0.2500, 0.2500, 0.2500, 0.0000, 0.0000, 0.0000, 0.0000],\n", + " [0.2000, 0.2000, 0.2000, 0.2000, 0.2000, 0.0000, 0.0000, 0.0000],\n", + " [0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.0000, 0.0000],\n", + " [0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.1429, 0.0000],\n", + " [0.1250, 0.1250, 0.1250, 0.1250, 0.1250, 0.1250, 0.1250, 0.1250]])" ] }, + "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "接下来我们检查一下这样是否work、以及这样是否可以提升性能" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [], - "source": [ - "import torch\n", - "import torch.nn.functional as F\n", - "import torch.nn as nn\n", - "\n", - "\n", - "# 开始划分训练集和测试集\n", - "code = enc.encode(text)\n", - "data = torch.tensor(code, dtype=torch.long) # Way 1 \n", - "\n", - "vocab_size = len(code)\n", - "n = int(0.9*len(data)) # 前90%都是训练集,后10%都是测试集\n", - "train_data = data[:n]\n", - "val_data = data[n:]\n", - "\n", - "# 进行参数的设置\n", - "device = 'cpu' # 模型运行的设备\n", - "block_size = 16 # 每个单元的最大长度\n", - "batch_size = 32 # 同时运行的批次大小\n", - "learning_rate = 0.3\n", - "max_iters = 1000\n", - "eval_interval = 300 # 对当前模型的运行结果进行评估的epoch数量\n", - "eval_iters = 200\n", - "\n", - "# 每次从数据集中获取x和y\n", - "def 
get_batch(split):\n", - " # generate a small batch of data of inputs x and targets y\n", - " data = train_data if split == 'train' else val_data\n", - " ix = torch.randint(len(data) - block_size, (batch_size,))\n", - " x = torch.stack([data[i:i+block_size] for i in ix])\n", - " y = torch.stack([data[i+1:i+block_size+1] for i in ix])\n", - " return x, y\n", - "\n", - "class BLM(nn.Module):\n", - " def __init__(self,vocab_size):\n", - " super().__init__()\n", - " self.token_embedding_table = nn.Embedding(vocab_size,vocab_size)\n", - " \n", - " def forward(self,idx,targets = None):\n", - " # 这里的self,就是我们之前的x,target就是之前的y\n", - " logits = self.token_embedding_table(idx) # (B,T) -> (B,T,C) # 这里我们通过Embedding操作直接得到预测分数\n", - " # 这里的预测分数过程与二分类或者多分类的分数是大致相同的\n", - "\n", - " \n", - " if targets is None:\n", - " loss = None\n", - " else: \n", - " B, T, C = logits.shape\n", - " logits = logits.view(B*T, C)\n", - " targets = targets.view(B*T) # 这里我们调整一下形状,以符合torch的交叉熵损失函数对于输入的变量的要求\n", - " loss = F.cross_entropy(logits, targets)\n", - "\n", - " return logits, loss\n", - "\n", - " def generate(self, idx, max_new_tokens):\n", - " '''\n", - " idx 是现在的输入的(B, T) array of indices in the current context\n", - " max_new_tokens 是产生的最大的tokens数量\n", - " '''\n", - "\n", - " for _ in range(max_new_tokens):\n", - " # 得到预测的结果\n", - " logits, loss = self(idx)\n", - " \n", - " # 只关注最后一个的预测\n", - " logits = logits[:, -1, :] # becomes (B, C)\n", - " # 对概率值应用softmax\n", - " probs = F.softmax(logits, dim=-1) # (B, C)\n", - " # 对input的每一行做n_samples次取值,输出的张量是每一次取值时input张量对应行的下标,也即找到概率值输出最大的下标,也对应着最大的编码\n", - " idx_next = torch.multinomial(probs, num_samples=1) # (B, 1)\n", - " # 将新产生的编码加入到之前的编码中,形成新的编码\n", - " idx = torch.cat((idx, idx_next), dim=1) # (B, T+1)\n", - " return idx " - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "ename": "", - "evalue": "", - "output_type": "error", - "traceback": [ - "\u001b[1;31m在当前单元格或上一个单元格中执行代码时 Kernel 崩溃。\n", - "\u001b[1;31m请查看单元格中的代码,以确定故障的可能原因。\n", - "\u001b[1;31m单击此处了解详细信息。\n", - "\u001b[1;31m有关更多详细信息,请查看 Jupyter log。" - ] - } - ], - "source": [ - "# 创建模型\n", - "model = BLM(vocab_size)\n", - "m = model.to(device)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# 简单的写下\n", - "optimizer = torch.optim.AdamW(m.parameters(), lr=learning_rate)\n", - "for steps in range(1000): # 随着迭代次数的增长,获得的效果会不断变好\n", - "\n", - " xb, yb = get_batch('train')\n", - "\n", - " logits, loss = m(xb, yb) # 用于评估损失\n", - " optimizer.zero_grad(set_to_none=True)\n", - " loss.backward()\n", - " optimizer.step()\n", - " print(epoch)\n", - " print(loss.item())\n", - "\n", - "print(loss.item())\n", - "\n", - "optimizer.step()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(enc.decode(m.generate(idx = torch.zeros((1, 1), dtype=torch.long), max_new_tokens=500)[0].tolist()))" - ] } ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "pytorch", "language": "python", "name": "python3" }, @@ -252,7 +377,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.18" + "version": "3.9.12" } }, "nbformat": 4, diff --git a/Lecture/l4.ipynb b/Lecture/l4.ipynb new file mode 100644 index 0000000..7331ba8 --- /dev/null +++ b/Lecture/l4.ipynb @@ -0,0 +1,100 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- 
在上一讲中,我们展示了几种实现简易平均加权的方式\n", + " - 在这里,做一些一些准备工作,从而实现后续搭建自注意力模块" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "import torch.nn as nn\n", + "import torch.nn.functional as F" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "class BLM(nn.Module):\n", + " def __init__(self,vocab_size):\n", + " super().__init__()\n", + " self.token_embedding_table = nn.Embedding(vocab_size,vocab_size)\n", + " \n", + " def forward(self,idx,targets = None):\n", + "\n", + " logits = self.token_embedding_table(idx) # (B,T) -> (B,T,C) # 这里我们通过Embedding操作直接得到预测分数\n", + " # 这里的预测分数过程与二分类或者多分类的分数是大致相同的\n", + "\n", + " \n", + " if targets is None:\n", + " loss = None\n", + " else: \n", + " B, T, C = logits.shape\n", + " logits = logits.view(B*T, C)\n", + " targets = targets.view(B*T) # 这里我们调整一下形状,以符合torch的交叉熵损失函数对于输入的变量的要求\n", + " loss = F.cross_entropy(logits, targets)\n", + "\n", + " return logits, loss\n", + "\n", + " def generate(self, idx, max_new_tokens):\n", + " '''\n", + " idx 是现在的输入的(B, T)序列 ,这是之前我们提取的batch的下标\n", + " max_new_tokens 是产生的最大的tokens数量\n", + " '''\n", + "\n", + " for _ in range(max_new_tokens):\n", + " \n", + " # 得到预测的结果\n", + " logits,_ = self(idx) # _ 表示省略,用于不获取相对应的函数返回值\n", + " \n", + " # 只关注最后一个的预测 (B,T,C)\n", + " logits = logits[:, -1, :] # becomes (B, C)\n", + " # 对概率值应用softmax\n", + " probs = F.softmax(logits, dim=-1) # (B, C)\n", + " # nn.argmax\n", + " # 对input的每一行做n_samples次取值,输出的张量是每一次取值时input张量对应行的下标,也即找到概率值输出最大的下标,也对应着最大的编码\n", + " idx_next = torch.multinomial(probs, num_samples=1) # (B, 1)\n", + " # 将新产生的编码加入到之前的编码中,形成新的编码\n", + " idx = torch.cat((idx, idx_next), dim=1) # (B, T+1)\n", + "\n", + " return idx " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "pytorch", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.12" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/Lecture/l4.py b/Lecture/l4.py new file mode 100644 index 0000000..ee31de5 --- /dev/null +++ b/Lecture/l4.py @@ -0,0 +1,120 @@ +import torch +import torch.nn as nn +from torch.nn import functional as F + +# hyperparameters +batch_size = 32 # how many independent sequences will we process in parallel? +block_size = 8 # what is the maximum context length for predictions? 
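+# the settings below control the training run itself: how many optimizer steps to take, how often train/val loss is estimated, the AdamW learning rate, and which device the model and tensors run on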
+max_iters = 3000 +eval_interval = 300 +learning_rate = 1e-2 +device = 'cuda' if torch.cuda.is_available() else 'cpu' +eval_iters = 200 +# ------------ + +torch.manual_seed(1337) + +# wget https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt +with open('../data/Xiyou.txt', 'r', encoding='utf-8') as f: + text = f.read() + +# here are all the unique characters that occur in this text +chars = sorted(list(set(text))) +vocab_size = len(chars) +# create a mapping from characters to integers +stoi = { ch:i for i,ch in enumerate(chars) } +itos = { i:ch for i,ch in enumerate(chars) } +encode = lambda s: [stoi[c] for c in s] # encoder: take a string, output a list of integers +decode = lambda l: ''.join([itos[i] for i in l]) # decoder: take a list of integers, output a string + +# Train and test splits +data = torch.tensor(encode(text), dtype=torch.long) +n = int(0.9*len(data)) # first 90% will be train, rest val +train_data = data[:n] +val_data = data[n:] + +# data loading +def get_batch(split): + # generate a small batch of data of inputs x and targets y + data = train_data if split == 'train' else val_data + ix = torch.randint(len(data) - block_size, (batch_size,)) + x = torch.stack([data[i:i+block_size] for i in ix]) + y = torch.stack([data[i+1:i+block_size+1] for i in ix]) + x, y = x.to(device), y.to(device) + return x, y + +@torch.no_grad() +def estimate_loss(): + out = {} + model.eval() + for split in ['train', 'val']: + losses = torch.zeros(eval_iters) + for k in range(eval_iters): + X, Y = get_batch(split) + logits, loss = model(X, Y) + losses[k] = loss.item() + out[split] = losses.mean() + model.train() + return out + +# super simple bigram model +class BigramLanguageModel(nn.Module): + + def __init__(self, vocab_size): + super().__init__() + self.token_embedding_table = nn.Embedding(vocab_size, vocab_size) + + def forward(self, idx, targets=None): + + logits = self.token_embedding_table(idx) # (B,T,C) + + if targets is None: + loss = None + else: + B, T, C = logits.shape + logits = logits.view(B*T, C) + targets = targets.view(B*T) + loss = F.cross_entropy(logits, targets) + + return logits, loss + + def generate(self, idx, max_new_tokens): + # idx is (B, T) array of indices in the current context + for _ in range(max_new_tokens): + # get the predictions + logits, loss = self(idx) + # focus only on the last time step + logits = logits[:, -1, :] # becomes (B, C) + # apply softmax to get probabilities + probs = F.softmax(logits, dim=-1) # (B, C) + # sample from the distribution + idx_next = torch.multinomial(probs, num_samples=1) # (B, 1) + # append sampled index to the running sequence + idx = torch.cat((idx, idx_next), dim=1) # (B, T+1) + return idx + +model = BigramLanguageModel(vocab_size) +m = model.to(device) + +# create a PyTorch optimizer +optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate) + +for iter in range(max_iters): + + # every once in a while evaluate the loss on train and val sets + if iter % eval_interval == 0: + losses = estimate_loss() + print(f"step {iter}: train loss {losses['train']:.4f}, val loss {losses['val']:.4f}") + + # sample a batch of data + xb, yb = get_batch('train') + + # evaluate the loss + logits, loss = model(xb, yb) + optimizer.zero_grad(set_to_none=True) + loss.backward() + optimizer.step() + +# generate from the model +context = torch.zeros((1, 1), dtype=torch.long, device=device) +print(decode(m.generate(context, max_new_tokens=500)[0].tolist())) diff --git a/Lecture/ln.ipynb b/Lecture/ln.ipynb new 
file mode 100644 index 0000000..8f855cb --- /dev/null +++ b/Lecture/ln.ipynb @@ -0,0 +1,260 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "在上一节中,我们简单介绍了nanoGPT的基本方式,但是我们也能看出这个GPT过于简陋,其生成效果急需进一步的提高\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 首先是编码,嵌入方式的改进\n", + "- 之前我们采用的是简单的一一对应的方式\n", + " - 对于只有26个大写和26个小写的英文字符来说,这样似乎还算合理,因为只是把50多个或者60多个字符按照顺序去编码为对应的数字而已" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- 例如,OpenAI 在之前的GPT-2,GPT-3,GPT-4系列中使用的是其\n", + "发布的 tiktoken 库,而 Google 也有其自己的分词/编码工具 SentencePiece,他们只是\n", + "不同的方式,但做的都是将“完整的句子转化成整数向量编码”这样的一件事情。例\n", + "如,我们现在可以利用 tiktoken 库来十分方便地调用 GPT-2 中所训练的 tokenizer,从\n", + "而实现编解码过程" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[37046, 21043, 10890, 247, 162, 224, 253, 35894]\n", + "我是孙悟空\n" + ] + } + ], + "source": [ + "# Way2\n", + "import tiktoken\n", + "enc = tiktoken.get_encoding(\"cl100k_base\")\n", + "# enc = tiktoken.get_encoding(\"gpt2\")\n", + "print(enc.encode(\"我是孙悟空\"))\n", + "print(enc.decode(enc.encode(\"我是孙悟空\")))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "之前经过encoder之后的数量不会改变" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "489540" + ] + }, + "execution_count": 2, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# 还是跟之前一样,我们先进行文本的读取\n", + "with open('../data/Xiyou.txt', 'r', encoding='utf-8') as f:\n", + " text = f.read()\n", + "\n", + "n = int(0.5*len(text)) # 前90%都是训练集,后10%都是测试集\n", + "text = text[:n]\n", + "\n", + " # 对文本进行编码\n", + "len(enc.encode(text)) # 获取编码之后的长度" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "接下来我们检查一下这样是否work、以及这样是否可以提升性能" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "import torch.nn.functional as F\n", + "import torch.nn as nn\n", + "\n", + "\n", + "# 开始划分训练集和测试集\n", + "code = enc.encode(text)\n", + "data = torch.tensor(code, dtype=torch.long) # Way 1 \n", + "\n", + "vocab_size = len(code)\n", + "n = int(0.9*len(data)) # 前90%都是训练集,后10%都是测试集\n", + "train_data = data[:n]\n", + "val_data = data[n:]\n", + "\n", + "# 进行参数的设置\n", + "device = 'cpu' # 模型运行的设备\n", + "block_size = 16 # 每个单元的最大长度\n", + "batch_size = 32 # 同时运行的批次大小\n", + "learning_rate = 0.3\n", + "max_iters = 1000\n", + "eval_interval = 300 # 对当前模型的运行结果进行评估的epoch数量\n", + "eval_iters = 200\n", + "\n", + "# 每次从数据集中获取x和y\n", + "def get_batch(split):\n", + " # generate a small batch of data of inputs x and targets y\n", + " data = train_data if split == 'train' else val_data\n", + " ix = torch.randint(len(data) - block_size, (batch_size,))\n", + " x = torch.stack([data[i:i+block_size] for i in ix])\n", + " y = torch.stack([data[i+1:i+block_size+1] for i in ix])\n", + " return x, y\n", + "\n", + "class BLM(nn.Module):\n", + " def __init__(self,vocab_size):\n", + " super().__init__()\n", + " self.token_embedding_table = nn.Embedding(vocab_size,vocab_size)\n", + " \n", + " def forward(self,idx,targets = None):\n", + " # 这里的self,就是我们之前的x,target就是之前的y\n", + " logits = self.token_embedding_table(idx) # (B,T) -> (B,T,C) # 这里我们通过Embedding操作直接得到预测分数\n", + " # 
这里的预测分数过程与二分类或者多分类的分数是大致相同的\n", + "\n", + " \n", + " if targets is None:\n", + " loss = None\n", + " else: \n", + " B, T, C = logits.shape\n", + " logits = logits.view(B*T, C)\n", + " targets = targets.view(B*T) # 这里我们调整一下形状,以符合torch的交叉熵损失函数对于输入的变量的要求\n", + " loss = F.cross_entropy(logits, targets)\n", + "\n", + " return logits, loss\n", + "\n", + " def generate(self, idx, max_new_tokens):\n", + " '''\n", + " idx 是现在的输入的(B, T) array of indices in the current context\n", + " max_new_tokens 是产生的最大的tokens数量\n", + " '''\n", + "\n", + " for _ in range(max_new_tokens):\n", + " # 得到预测的结果\n", + " logits, loss = self(idx)\n", + " \n", + " # 只关注最后一个的预测\n", + " logits = logits[:, -1, :] # becomes (B, C)\n", + " # 对概率值应用softmax\n", + " probs = F.softmax(logits, dim=-1) # (B, C)\n", + " # 对input的每一行做n_samples次取值,输出的张量是每一次取值时input张量对应行的下标,也即找到概率值输出最大的下标,也对应着最大的编码\n", + " idx_next = torch.multinomial(probs, num_samples=1) # (B, 1)\n", + " # 将新产生的编码加入到之前的编码中,形成新的编码\n", + " idx = torch.cat((idx, idx_next), dim=1) # (B, T+1)\n", + " return idx " + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "ename": "", + "evalue": "", + "output_type": "error", + "traceback": [ + "\u001b[1;31m在当前单元格或上一个单元格中执行代码时 Kernel 崩溃。\n", + "\u001b[1;31m请查看单元格中的代码,以确定故障的可能原因。\n", + "\u001b[1;31m单击此处了解详细信息。\n", + "\u001b[1;31m有关更多详细信息,请查看 Jupyter log。" + ] + } + ], + "source": [ + "# 创建模型\n", + "model = BLM(vocab_size)\n", + "m = model.to(device)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# 简单的写下\n", + "optimizer = torch.optim.AdamW(m.parameters(), lr=learning_rate)\n", + "for steps in range(1000): # 随着迭代次数的增长,获得的效果会不断变好\n", + "\n", + " xb, yb = get_batch('train')\n", + "\n", + " logits, loss = m(xb, yb) # 用于评估损失\n", + " optimizer.zero_grad(set_to_none=True)\n", + " loss.backward()\n", + " optimizer.step()\n", + " print(epoch)\n", + " print(loss.item())\n", + "\n", + "print(loss.item())\n", + "\n", + "optimizer.step()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(enc.decode(m.generate(idx = torch.zeros((1, 1), dtype=torch.long), max_new_tokens=500)[0].tolist()))" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.18" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/README.md b/README.md index c46a1d0..e9a76d3 100644 --- a/README.md +++ b/README.md @@ -74,7 +74,17 @@ > Lecture1 : [教程初衷](Lecture/l1.ipynb) > -> Lecture2 : [数据集处理与搜集](Lecture/l2.ipynb) +> Lecture2 : [基础GPT框架构造与初步效果](Lecture/l2.ipynb) ,[视频在制作中 ] +> +> Lecture3 : [数学推导与模型优化](Lecture/l3.ipynb) ,[视频在制作中 ] +> +> Lecture4 : [对话能力实现](Lecture/l4.ipynb) ,[视频在制作中 ] + +> +> +> + +> Lecture+ : [对于编码解码方式的讨论](Lecture/ln.ipynb) ## 💪 对话能力实现 主要参考[VatsaDev/nanoChatGPT: nanogpt turned into a chat model (github.com)](https://github.com/VatsaDev/nanoChatGPT)
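As a closing note to the derivation in Lecture/l3.ipynb above: the masked-softmax weights built there are still uniform over past positions. Below is a minimal sketch of the step the lectures are building toward — an assumption on my part, not code from this repository, with `head_size` chosen purely for illustration — which keeps the same `tril`/softmax masking but lets the weights depend on the data through learned query/key/value projections.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(42)
B, T, C = 4, 8, 32          # batch, time, channels (C enlarged from 2 so a projection makes sense)
head_size = 16              # size of one attention head -- an illustrative choice
x = torch.randn(B, T, C)

key   = nn.Linear(C, head_size, bias=False)
query = nn.Linear(C, head_size, bias=False)
value = nn.Linear(C, head_size, bias=False)

k = key(x)      # (B, T, head_size)
q = query(x)    # (B, T, head_size)
v = value(x)    # (B, T, head_size)

# data-dependent affinities replace the all-ones matrix; scaling keeps softmax from saturating
wei = q @ k.transpose(-2, -1) * head_size**-0.5   # (B, T, T)

tril = torch.tril(torch.ones(T, T))
wei = wei.masked_fill(tril == 0, float('-inf'))   # same causal mask as in xbow3
wei = F.softmax(wei, dim=-1)                      # each row is a weighting over earlier positions

out = wei @ v   # (B, T, head_size)
print(out.shape)  # torch.Size([4, 8, 16])
```

The only conceptual change from `xbow3` is that `wei` is now computed from `x` itself instead of being a fixed lower-triangular averaging matrix; the masking, the row-wise softmax, and the final weighted sum are unchanged.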