<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>浩翰Redamancy的博客</title>
<subtitle>文质彬彬 然后君子</subtitle>
<link href="/atom.xml" rel="self"/>
<link href="https://plutoacharon.github.io/"/>
<updated>2020-05-17T14:18:58.021Z</updated>
<id>https://plutoacharon.github.io/</id>
<author>
<name>浩翰</name>
</author>
<generator uri="http://hexo.io/">Hexo</generator>
<entry>
<title>解决docker中修改docker.daemon文件后启动失败</title>
<link href="https://plutoacharon.github.io/2020/05/17/%E8%A7%A3%E5%86%B3docker%E4%B8%AD%E4%BF%AE%E6%94%B9docker-daemon%E6%96%87%E4%BB%B6%E5%90%8E%E5%90%AF%E5%8A%A8%E5%A4%B1%E8%B4%A5/"/>
<id>https://plutoacharon.github.io/2020/05/17/解决docker中修改docker-daemon文件后启动失败/</id>
<published>2020-05-17T14:18:14.000Z</published>
<updated>2020-05-17T14:18:58.021Z</updated>
<content type="html"><![CDATA[<h2 id="在-docker-配置文件中设置"><a href="#在-docker-配置文件中设置" class="headerlink" title="在 docker 配置文件中设置"></a>在 docker 配置文件中设置</h2><p>docker 1.12 版本之后, 建议在 docker 的 js 配置文件中配置, 路径为 /etc/docker/daemon.json 默认没有这个文件, 可以手动创建此文件, docker 启动时默认会读取此配置文件<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">{</span><br><span class="line"> "registry-mirrors": ["https://6y2639ye.mirror.aliyuncs.com"]</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>我这里配置的加速源</p><p>在一次误操作中 动了<code>/usr/lib/systemd/system/docker.service</code>下的文件 报错:<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]# systemctl status docker.service</span><br><span class="line">● docker.service - Docker Application Container Engine</span><br><span class="line"> Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)</span><br><span class="line"> Active: failed (Result: start-limit) since 四 2020-05-14 10:19:16 CST; 25s ago</span><br><span class="line"> Docs: https://docs.docker.com</span><br><span class="line"> Process: 2493 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)</span><br><span class="line"> Main PID: 2493 (code=exited, status=1/FAILURE)</span><br><span class="line"></span><br><span class="line">5月 14 10:19:14 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.</span><br><span class="line">5月 14 10:19:14 localhost.localdomain systemd[1]: Unit docker.service entered failed state.</span><br><span class="line">5月 14 10:19:14 localhost.localdomain systemd[1]: docker.service failed.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: docker.service holdoff time over, scheduling restart.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: Stopped Docker Application Container Engine.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: start request repeated too quickly for docker.service</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: Unit docker.service entered failed state.</span><br><span class="line">5月 14 10:19:16 localhost.localdomain systemd[1]: docker.service failed.</span><br></pre></td></tr></table></figure></p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><p> 是因为 docker 的 socket 配置出现了冲突, 接下来查看 docker 的启动入口文件<br> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span 
class="line">5</span><br></pre></td><td class="code"><pre><span class="line">> vim /lib/systemd/system/docker.service # Ubuntu的路径; CentOS 的路径为: /usr/lib/systemd/system/docker.service</span><br><span class="line"></span><br><span class="line">ExecStart=/usr/bin/dockerd -H fd://</span><br><span class="line">修改为</span><br><span class="line">ExecStart=/usr/bin/dockerd</span><br></pre></td></tr></table></figure></p><p>从上面可以看出, 在 docker 的启动入口文件中配置了 host 相关的信息, 而在 docker 的配置文件中也配置了 host 的信息, 所以发生了冲突. 解决办法, 建议将 docker 启动入口文件中的 -H fd:// 删除, 再重启 docker 服务即可<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># systemctl daemon-reload</span></span><br><span class="line">[root@localhost ~]<span class="comment"># systemctl start docker</span></span><br></pre></td></tr></table></figure></p>]]></content>
<summary type="html">
<h2 id="在-docker-配置文件中设置"><a href="#在-docker-配置文件中设置" class="headerlink" title="在 docker 配置文件中设置"></a>在 docker 配置文件中设置</h2><p>docker 1.12 版本
</summary>
<category term="Docker" scheme="https://plutoacharon.github.io/categories/Docker/"/>
<category term="Dokcer" scheme="https://plutoacharon.github.io/tags/Dokcer/"/>
</entry>
<entry>
<title>Python垃圾回收与内存管理</title>
<link href="https://plutoacharon.github.io/2020/05/12/Python%E5%9E%83%E5%9C%BE%E5%9B%9E%E6%94%B6%E4%B8%8E%E5%86%85%E5%AD%98%E7%AE%A1%E7%90%86/"/>
<id>https://plutoacharon.github.io/2020/05/12/Python垃圾回收与内存管理/</id>
<published>2020-05-12T14:42:31.000Z</published>
<updated>2020-05-12T14:42:53.246Z</updated>
<content type="html"><![CDATA[<p>@[toc]</p><h1 id="Python垃圾回收"><a href="#Python垃圾回收" class="headerlink" title="Python垃圾回收"></a>Python垃圾回收</h1><p>引用计数器为主,标记清除和分代回收为辅+缓存机制</p><h2 id="1-引用计数器"><a href="#1-引用计数器" class="headerlink" title="1. 引用计数器"></a>1. 引用计数器</h2><h3 id="1-1-环状双向链表-refchain"><a href="#1-1-环状双向链表-refchain" class="headerlink" title="1.1 环状双向链表 refchain"></a>1.1 环状双向链表 refchain</h3><p>在Python程序中创建的任何对象都会放在<code>refchain</code>中</p><p><code>static PyObject refchain = {&refchain, &refchain}</code></p><p><img src="https://img-blog.csdnimg.cn/20200509213439801.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>在Python程序中创建的任何对象都会放在<code>refchain</code>链表中</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">str1 = <span class="string">"str"</span></span><br><span class="line">num1 = <span class="number">1</span></span><br><span class="line">list1 = [<span class="string">"1"</span>,<span class="string">"2"</span>]</span><br></pre></td></tr></table></figure><p>当进行上述操作时,Python内部会创建一些数据(上一个对象,下一个对象,类型,引用个数,元素个数)</p><p><code>include/object.h</code></p><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">define</span> _PyObject_HEAD_EXTRA \</span></span><br><span class="line"> <span class="class"><span class="keyword">struct</span> _<span class="title">object</span> *_<span class="title">ob_next</span>;</span> \</span><br><span class="line"> <span class="class"><span class="keyword">struct</span> _<span class="title">object</span> *_<span class="title">ob_prev</span>;</span></span><br><span class="line"> </span><br><span class="line"><span class="meta">#<span class="meta-keyword">define</span> PyObject_HEAD PyObject ob_base;</span></span><br><span class="line"> </span><br><span class="line"><span class="meta">#<span class="meta-keyword">define</span> PyObject_VAR_HEAD PyVarObject ob_base;</span></span><br><span class="line"> </span><br><span class="line"> </span><br><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> _<span class="title">object</span> {</span></span><br><span class="line"> _PyObject_HEAD_EXTRA <span class="comment">// 用于构造双向链表</span></span><br><span class="line"> Py_ssize_t ob_refcnt; <span class="comment">// 引用计数器</span></span><br><span class="line"> <span class="class"><span class="keyword">struct</span> _<span class="title">typeobject</span> *<span class="title">ob_type</span>;</span> <span class="comment">// 数据类型</span></span><br><span class="line">} PyObject;</span><br><span class="line"> </span><br><span 
class="line"> </span><br><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> {</span></span><br><span class="line"> PyObject ob_base; <span class="comment">// PyObject对象</span></span><br><span class="line"> Py_ssize_t ob_size; <span class="comment">/* Number of items in variable part,即:元素个数 */</span></span><br><span class="line">} PyVarObject;</span><br></pre></td></tr></table></figure><p>2个结构体</p><ul><li><strong>PyObject</strong>,此结构体中包含3个元素。<ul><li>_PyObject_HEAD_EXTRA,用于构造双向链表。</li><li>ob_refcnt,引用计数器。</li><li>ob_type,数据类型。</li></ul></li><li><strong>PyVarObject</strong>,次结构体中包含4个元素(ob_base中包含3个元素)<ul><li>ob_base,PyObject结构体对象,即:包含PyObject结构体中的三个元素。</li><li>ob_size,内部元素个数。</li></ul></li></ul><p>3个宏定义</p><ul><li>PyObject_HEAD,代指PyObject结构体。</li><li>PyVarObject_HEAD,代指PyVarObject对象。</li><li>_PyObject_HEAD_EXTRA,代指前后指针,用于构造双向队列。</li></ul><p>Python中所有类型创建对象时,底层都是与PyObject和PyVarObject结构体实现,一般情况下由单个元素组成对象内部会使用PyObject结构体(float)、由多个元素组成的对象内部会使用PyVarObject结构体(str/int/list/dict/tuple/set/自定义类),因为由多个元素组成的话是需要为其维护一个 ob_size(内部元素个数)。</p><p><strong>PyObject:float</strong></p><p><strong>PyVarObject:list、dict、tuple、set、int、str、bool</strong></p><p>因为Python中的int是不限制长度的,所以底层实现是用的str,所以int也属于PyVarObject阵营。Python中的bool实际上是0和1,所以也是int,也属于PyVarObject阵营。</p><h3 id="1-2-类型封装结构体"><a href="#1-2-类型封装结构体" class="headerlink" title="1.2 类型封装结构体"></a>1.2 类型封装结构体</h3><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// float类型</span></span><br><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> {</span></span><br><span class="line"> PyObject_HEAD</span><br><span class="line"> <span class="keyword">double</span> ob_fval;</span><br><span class="line">} PyFloatObject;</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">data = <span class="number">1.11</span></span><br><span class="line">内部会创建:</span><br><span class="line"> _ob_netx = refchain的上一个对象</span><br><span class="line"> _ob_prev = refchain的下一个对象</span><br><span class="line"> ob_refcnt = <span class="number">1</span> </span><br><span class="line"> ob_type = float</span><br><span class="line"> ob_fval = <span class="number">1.11</span></span><br></pre></td></tr></table></figure><h3 id="1-3-引用计数器"><a href="#1-3-引用计数器" class="headerlink" title="1.3 引用计数器"></a>1.3 引用计数器</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">1.11</span></span><br><span class="line">v2 = <span class="number">1</span></span><br><span class="line">v3 = (<span class="number">1</span>,<span class="number">2</span>,<span 
class="number">3</span>)</span><br></pre></td></tr></table></figure><p>当python程序运行时,会根据数据类型的不同找到对应的结构体,根据结构体中的字段来进行创建相关的数据,然后将对象添加到refchain双线链表中。</p><p>每个对象中有<code>ob_refcnt</code>引用计数器,值默认为1,当有其他变量引用对象时,引用计数器就会发生变化。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">a = <span class="number">1</span></span><br><span class="line">b = a</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">a = <span class="number">1</span></span><br><span class="line">b = a</span><br><span class="line"><span class="keyword">del</span> b <span class="comment"># b变量删除: b对应的对象引用器-1</span></span><br><span class="line"><span class="keyword">del</span> a <span class="comment"># a变量删除: a对用的对象引用其-1</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 当一个对象的引用计数器为0时,意味着没有人使用这个对象, 这个对象就是垃圾, 垃圾回收</span></span><br><span class="line"><span class="comment"># 回收: </span></span><br><span class="line">- 对象从refchain链表中移除</span><br><span class="line">- 将对象销毁, 内存归还</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 创建对象并初始化引用计数器为1</span></span><br><span class="line">num1 = <span class="number">1</span></span><br><span class="line">num2 = num1 <span class="comment"># 计数器+1</span></span><br><span class="line">num3 = num1 <span class="comment"># 计数器+1</span></span><br><span class="line">num4 = num1 <span class="comment"># 计数器+1</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 创建对象并初始化引用计数器为1</span></span><br><span class="line">str1 = <span class="string">"str"</span> <span class="comment"># 计数器+1</span></span><br><span class="line">str2 = str1 <span class="comment"># 计数器+1</span></span><br></pre></td></tr></table></figure><p><img src="https://img-blog.csdnimg.cn/20200509213502784.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="1-4-循环引用的问题"><a href="#1-4-循环引用的问题" class="headerlink" title="1.4 循环引用的问题"></a>1.4 循环引用的问题</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">list1 = [<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>] </span><br><span class="line">list2 = [<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>]</span><br><span class="line">list1.append(list2) <span class="comment"># 把v2追加到v1中, v2对应的引用计数器加1</span></span><br><span 
class="line">list2.append(list1) <span class="comment"># 把v1追加到v2中, v1对应的引用计数器加1</span></span><br></pre></td></tr></table></figure><p> list1与list2相互引用,如果不存在其他对象对它们的引用,list1与list2的引用计数也仍然为1,所占用的内存永远无法被回收,这将是致命的。</p><p> 对于如今的强大硬件,缺点1尚可接受,但是循环引用导致内存泄露,注定python还将引入新的回收机制。</p><h2 id="2-标记清除"><a href="#2-标记清除" class="headerlink" title="2. 标记清除"></a>2. 标记清除</h2><p>目的:为了解决引用计数器循环引用的不足</p><p>实现:在Python的底层再维护一个链表,链表中专门放可能存在循环引用的对象(list/tuple/dict/set)</p><p><img src="https://img-blog.csdnimg.cn/20200509213523566.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="\[外链图片转存失败,源站可能有防盗链机制,建议将图片保存下来直接上传(img-Y2sWHo9q-1589031264635)(../../Images/image-20200509205236478.png)\]"></p><p>在Python内部<code>某种情况</code>触发, 会去扫描<code>可能存在循环应用的链表</code>中的每个元素, 检查是否有循环引用, 如果有则让双方的引用计数器-1; 如果是0则进行垃圾回收</p><p>问题:</p><ul><li>什么时候扫描</li><li>可能存在循环引用的链表扫描代价大,每次扫描时间久</li></ul><h2 id="3-分代回收"><a href="#3-分代回收" class="headerlink" title="3. 分代回收"></a>3. 分代回收</h2><p>将可能存在循环应用的对象维护成3个链表:</p><ul><li>0代:0代中对象的个数达到700个扫描一次</li><li>1代:0代扫描10次,则1代扫描一次</li><li>2代:1代扫描10次,则2代扫描一次</li></ul><h2 id="4-小结"><a href="#4-小结" class="headerlink" title="4. 小结"></a>4. 小结</h2><p>在Python中维护了一个<code>refchain</code>的双向环状链表, 这个链表中存储程序创建的所有对象, 每种类型的对象中都有一个<code>ob_refcnt</code>引用计数器的值, 引用个数+1, -1 , 最后当引用计数器变成0时会进行垃圾回收(对象销毁, 从refchain中移除)</p><p>但是. 在Python中对于那些可以有多个元素组成的对象可能会存在循环引用的问题, 为了解决这个问题Python引入了标记清除和分带回收, 在其内部维护了4个链表</p><ul><li>refchain</li><li>0代</li><li>1代</li><li>2代</li></ul><p>在源码内部当达到各自的阈值时, 就会触发扫描链表进行标记清除的动作(有循环则各自-1)</p><h1 id="Python缓存"><a href="#Python缓存" class="headerlink" title="Python缓存"></a>Python缓存</h1><h2 id="1-池"><a href="#1-池" class="headerlink" title="1. 池"></a>1. 池</h2><p>为了避免重复创建和销毁一些常见对象, Python建立了维护池</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 启动解释器时, python内部帮我们创建: -5,-4...257</span></span><br><span class="line">v1 = <span class="number">7</span> <span class="comment"># 内部不会开辟内存, 直接去池中获取</span></span><br><span class="line">v2 = <span class="number">8</span> <span class="comment"># 内部不会开辟内存, 直接去池中获取</span></span><br></pre></td></tr></table></figure><h2 id="2-free-list"><a href="#2-free-list" class="headerlink" title="2. free_list"></a>2. 
free_list</h2><p>当一个对象的引用计数器为0时, 按理说应该回收, 但是内部不会直接回收, 而是将对象添加到<code>free_list</code>链表中当缓存。以后再去创建对象时,不再重新开辟内存,而是直接使用<code>free_list</code></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">1.11</span> <span class="comment"># 开辟内存, 内存存储结构体中定义那几个值, 并存到refchain中</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">del</span> v1 <span class="comment"># refchain中移除, 将对象添加到free_list中(80)个, free_list满了则销毁</span></span><br><span class="line"></span><br><span class="line">v2 = <span class="number">2.22</span> <span class="comment"># 不会重新开辟内存, 去free_list中获取对象, 对象内部数据初始化, 再放到refchain中</span></span><br></pre></td></tr></table></figure><ul><li><p>float类型,维护的free_list链表最多可缓存100个float对象。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">3.14</span> <span class="comment"># 开辟内存来存储float对象,并将对象添加到refchain链表。 </span></span><br><span class="line">print( id(v1) ) <span class="comment"># 内存地址:4436033488 </span></span><br><span class="line"><span class="keyword">del</span> v1 <span class="comment"># 引用计数器-1,如果为0则在rechain链表中移除,不销毁对象,而是将对象添加到float的free_list. </span></span><br><span class="line">v2 = <span class="number">9.999</span> <span class="comment"># 优先去free_list中获取对象,并重置为9.999,如果free_list为空才重新开辟内存。 </span></span><br><span class="line">print( id(v2) ) <span class="comment"># 内存地址:4436033488 </span></span><br><span class="line"><span class="comment"># 注意:引用计数器为0时,会先判断free_list中缓存个数是否满了,未满则将对象缓存,已满则直接将对象销毁。</span></span><br></pre></td></tr></table></figure></li><li><p>int类型,不是基于free_list,而是维护一个small_ints链表保存常见数据(小数据池),小数据池范围:<code>-5 <= value < 257</code>。即:重复使用这个范围的整数时,不会重新开辟内存。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">v1 = <span class="number">38</span> <span class="comment"># 去小数据池small_ints中获取38整数对象,将对象添加到refchain并让引用计数器+1。 </span></span><br><span class="line">print( id(v1)) <span class="comment">#内存地址:4514343712 </span></span><br><span class="line">v2 = <span class="number">38</span> <span class="comment"># 去小数据池small_ints中获取38整数对象,将refchain中的对象的引用计数器+1。 </span></span><br><span class="line">print( id(v2) ) <span class="comment">#内存地址:4514343712 </span></span><br><span class="line"><span class="comment"># 注意:在解释器启动时候-5~256就已经被加入到small_ints链表中且引用计数器初始化为1,代码中使用的值时直接去small_ints中拿来用并将引用计数器+1即可。另外,small_ints中的数据引用计数器永远不会为0(初始化时就设置为1了),所以也不会被销毁。</span></span><br></pre></td></tr></table></figure></li><li><p>str类型,维护<code>unicode_latin1[256]</code>链表,内部将所有的<code>ascii字符</code>缓存起来,以后使用时就不再反复创建。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span 
class="line">v1 = <span class="string">"A"</span> </span><br><span class="line">print( id(v1) ) <span class="comment"># 输出:4517720496 </span></span><br><span class="line"><span class="keyword">del</span> v1 v2 = <span class="string">"A"</span> </span><br><span class="line">print( id(v1) ) <span class="comment"># 输出:4517720496 # 除此之外,Python内部还对字符串做了驻留机制,针对那么只含有字母、数字、下划线的字符串(见源码Objects/codeobject.c),如果内存中已存在则不会重新在创建而是使用原来的地址里(不会像free_list那样一直在内存存活,只有内存中有才能被重复利用)。 </span></span><br><span class="line">v1 = <span class="string">"wupeiqi"</span> </span><br><span class="line">v2 = <span class="string">"wupeiqi"</span> </span><br><span class="line">print(id(v1) == id(v2)) <span class="comment"># 输出:True</span></span><br></pre></td></tr></table></figure></li><li><p>list类型,维护的free_list数组最多可缓存80个list对象。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">v1 = [<span class="number">11</span>,<span class="number">22</span>,<span class="number">33</span>] </span><br><span class="line">print( id(v1) ) <span class="comment"># 输出:4517628816 </span></span><br><span class="line"><span class="keyword">del</span> v1 v2 = [<span class="string">"武"</span>,<span class="string">"沛齐"</span>] </span><br><span class="line">print( id(v2) ) <span class="comment"># 输出:4517628816</span></span><br></pre></td></tr></table></figure></li><li><p>tuple类型,维护一个free_list数组且数组容量20,数组中元素可以是链表且每个链表最多可以容纳2000个元组对象。元组的free_list数组在存储数据时,是按照元组可以容纳的个数为索引找到free_list数组中对应的链表,并添加到链表中。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">v1 = (<span class="number">1</span>,<span class="number">2</span>) </span><br><span class="line">print( id(v1) ) </span><br><span class="line"><span class="keyword">del</span> v1 <span class="comment"># 因元组的数量为2,所以会把这个对象缓存到free_list[2]的链表中。 </span></span><br><span class="line">v2 = (<span class="string">"武沛齐"</span>,<span class="string">"Alex"</span>) <span class="comment"># 不会重新开辟内存,而是去free_list[2]对应的链表中拿到一个对象来使用。 </span></span><br><span class="line">print( id(v2) )</span><br></pre></td></tr></table></figure></li><li><p>dict类型,维护的free_list数组最多可缓存80个dict对象。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">v1 = {<span class="string">"k1"</span>:<span class="number">123</span>} </span><br><span class="line"> print( id(v1) ) <span class="comment"># 输出:4515998128 </span></span><br><span class="line"> <span class="keyword">del</span> v1 v2 = {<span class="string">"name"</span>:<span class="string">"武沛齐"</span>,<span class="string">"age"</span>:<span class="number">18</span>,<span class="string">"gender"</span>:<span class="string">"男"</span>} </span><br><span class="line"> print( id(v1) ) <span class="comment"># 输出:4515998128</span></span><br></pre></td></tr></table></figure></li></ul><p>这个老师讲的通俗易懂, 非常棒, 
更多详细的解释:<code>https://pythonav.com/wiki/detail/6/88/</code></p><p>参考资料:</p><p><code>https://www.bilibili.com/video/BV1Ei4y1b7mo?p=2</code></p><p><code>https://my.oschina.net/hebianxizao/blog/57367</code></p><p><code>https://www.cnblogs.com/wupeiqi/articles/11507404.html</code></p>]]></content>
<summary type="html">
<h1 id="Python垃圾回收"><a href="#Python垃圾回收" class="headerlink" title="Python垃圾回收"></a>Python垃圾回收</h1><p>引用计数器为主,标记清除和分代回收为辅+缓存机制
</summary>
<category term="Python" scheme="https://plutoacharon.github.io/categories/Python/"/>
<category term="Python" scheme="https://plutoacharon.github.io/tags/Python/"/>
</entry>
<entry>
<title>git push文件夹时报错Fatal: HttpRequestException encountered.</title>
<link href="https://plutoacharon.github.io/2020/05/12/git-push%E6%96%87%E4%BB%B6%E5%A4%B9%E6%97%B6%E6%8A%A5%E9%94%99Fatal-HttpRequestException-encountered/"/>
<id>https://plutoacharon.github.io/2020/05/12/git-push文件夹时报错Fatal-HttpRequestException-encountered/</id>
<published>2020-05-12T14:41:22.000Z</published>
<updated>2020-05-12T14:42:16.458Z</updated>
<content type="html"><![CDATA[<p>在使用git push时报出如下的错误:<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">$ git push -u origin master</span><br><span class="line">fatal: HttpRequestException encountered.</span><br><span class="line"> 发送请求时出错。</span><br><span class="line">fatal: HttpRequestException encountered.</span><br><span class="line"> 发送请求时出错。</span><br><span class="line">Username <span class="keyword">for</span> <span class="string">'https://github.com'</span>:</span><br></pre></td></tr></table></figure></p><p>之前时不需要输入的,现在需要输入了,原因是git更新了一个证书,我们本地需要再更新以下:<br><a href="https://github.com/microsoft/Git-Credential-Manager-for-Windows/releases" target="_blank" rel="noopener">https://github.com/microsoft/Git-Credential-Manager-for-Windows/releases</a><br>进去后点击下载安装 GCMW最新版即可:<br><img src="https://img-blog.csdnimg.cn/20200506152021834.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<p>在使用git push时报出如下的错误:<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="lin
</summary>
<category term="GitHub" scheme="https://plutoacharon.github.io/categories/GitHub/"/>
<category term="GitHub" scheme="https://plutoacharon.github.io/tags/GitHub/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(八)---- 基于Docker配置NFS实现Nginx动静分离</title>
<link href="https://plutoacharon.github.io/2020/05/12/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E5%85%AB%EF%BC%89-%E5%9F%BA%E4%BA%8EDocker%E9%85%8D%E7%BD%AENFS%E5%AE%9E%E7%8E%B0Nginx%E5%8A%A8%E9%9D%99%E5%88%86%E7%A6%BB/"/>
<id>https://plutoacharon.github.io/2020/05/12/HA高可用与负载均衡入门到实战(八)-基于Docker配置NFS实现Nginx动静分离/</id>
<published>2020-05-12T14:40:37.000Z</published>
<updated>2020-05-12T14:40:53.177Z</updated>
<content type="html"><![CDATA[<h2 id="NFS介绍"><a href="#NFS介绍" class="headerlink" title="NFS介绍"></a>NFS介绍</h2><p>NFS 是Network File System的缩写,即网络文件系统。一种使用于分散式文件系统的协定,由Sun公司开发,于1984年向外公布。功能是通过网络让不同的机器、不同的操作系统能够彼此分享个别的数据,让应用程序在客户端通过网络访问位于服务器磁盘中的数据,是在类Unix系统间实现磁盘文件共享的一种方法。</p><p>NFS 的基本原则是“容许不同的客户端及服务端通过一组RPC分享相同的文件系统”,它是独立于操作系统,容许不同硬件及操作系统的系统共同进行文件的分享。</p><p>NFS在文件传送或信息传送过程中依赖于RPC协议。RPC,远程过程调用 (Remote Procedure Call) 是能使客户端执行其他系统中程序的一种机制。NFS本身是没有提供信息传输的协议和功能的,但NFS却能让我们通过网络进行资料的分享,这是因为NFS使用了一些其它的传输协议。而这些传输协议用到这个RPC功能的。可以说NFS本身就是使用RPC的一个程序。或者说NFS也是一个RPC SERVER。所以只要用到NFS的地方都要启动RPC服务,不论是NFS SERVER或者NFS CLIENT。这样SERVER和CLIENT才能通过RPC来实现PROGRAM PORT的对应。可以这么理解RPC和NFS的关系:NFS是一个文件系统,而RPC是负责负责信息的传输。</p><h2 id="什么是RPC"><a href="#什么是RPC" class="headerlink" title="什么是RPC"></a>什么是RPC</h2><p>由于NFS支持的功能相当多,而不同的功能都会使用不同的程序来启动,每启动一个功能就会启用一些端口来传输数据,因此,NFS的功能所对应的端口才无法固定,而是随机取用一些未使用的端口来作为传输之用,其中centos5.x随机端口为小于1024的,而centos6.x随机端口都是较大的。</p><p>因为端口不固定,这样一来就会造成客户端与NFS服务器端的通讯障碍,由于NFS客户端必须要知道NFS服务器端的数据传输端口才能进行通信交互数据。</p><p>解决以上问题,我们需要RPC服务来帮忙,NFS的RPC服务主要的功能是记录每个NFS功能所对应的端口号,并且在NFS客户端请求时将该端口和功能对应的信息传递给请求数据的NFS客户端,从而可以确保客户端连接正确的NFS端口上去,达到实现数据传输交互数据目的。RPC相当于NFS服务的中介。</p><p>如图所示:NFS工作流程简图</p><p><img src="https://img-blog.csdnimg.cn/20200430190602221.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><p>大致如以下几点:</p><p>1、首先用户访问网站程序,由程序在NFS客户端上发出NFS文件存取功能的询问请求,这时NFS客户端(即执行程序的服务器)RPC服务(portmap或rpcbind服务)就会通过网络向NFS服务端的RPC服务(portmap或rpcbind)的111端口发出NFS文件存取功能的询问请求。</p><p>2、NFS服务器端的RPC服务(即portmap或rpcbind)找到对应的已注册的NFS daemon端口后,通知NFS客户端的RPC服务(即portmap或rpcbind服务)</p><p>3、此时NFS客户端就可以获取到正确的端口,然后就直接与NFS daemon联机存取数据了。</p><p>4、NFS客户端把数据存取成功后,返回给当前访问程序,告知用户存取结果,作为网站用户,我们就完成了一次存取操作。 由于NFS的各项功能都需要想RPC服务注册,所以RPC服务才能获取到NFS服务的各项功能对应的端口、PID、NFS在主机所监听的IP等,NFS客户端才能够通过向RPC服务询问才找到正确的端口。也就是说,NFS需要有RPC服务的协助才能成功对外提供服务。由上面的描述,我们不难推出:无论是NFS客户端还是NFS服务器端,当要使用NFS时,都需要首先启动RPC服务,然后在启动NFS服务,客户端可以不启动NFS服务。</p><h2 id="安装配置NFS服务器"><a href="#安装配置NFS服务器" class="headerlink" title="安装配置NFS服务器"></a>安装配置NFS服务器</h2><h3 id="使用docker容器配置NFS服务器"><a href="#使用docker容器配置NFS服务器" class="headerlink" title="使用docker容器配置NFS服务器"></a>使用docker容器配置NFS服务器</h3><p>1) 启动centos容器并进入<br>docker run -d –privileged centos:v1 /usr/sbin/init<br>2) 在centos容器中使用yum方式安装nfs-utils<br><code>yum install nfs-utils</code><br>3) 保存容器为镜像</p><p>#docker commit 容器ID nfs<br>4) 启动容器nfs,设定地址为172.18.0.120</p><p>#docker run -d –privileged –net cluster –ip 172.18.0.120 –name nfs nfs /usr/sbin/init</p><p>5) 启动nfs服务,查看监听端口<br><code>systemctl start nfs-server</code></p><p>7) 新建共享目录/var/www/share,设置权限为777</p><p>8) 编辑/etc/exports文件<br><code>/var/www/share 172.18.0.*(rw,sync)</code></p><p>9) 导出nfs共享目录<br><code>exportfs -rv</code><br>10) 查看nfs上的共享目录</p><p>#showmount -e IP地址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@c90e05748250 /]<span class="comment"># showmount -e 172.18.0.1</span></span><br><span class="line">Export list <span class="keyword">for</span> 172.18.0.1:</span><br><span class="line">/var/www/share 172.18.0.*</span><br></pre></td></tr></table></figure></p><h3 id="使用宿主机配置NFS服务器"><a href="#使用宿主机配置NFS服务器" class="headerlink" title="使用宿主机配置NFS服务器"></a>使用宿主机配置NFS服务器</h3><p>1) <code>yum install nfs-utils</code> //在宿主机安装nfs</p><p>2) 查看nfs配置文件<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">more /etc/nfs.onf </span><br><span class="line">more /etc/nfsmount.conf</span><br></pre></td></tr></table></figure></p><p>3) 启动nfs服务,查看监听端口</p><p><code>systemctl start nfs-server</code></p><p>4) 新建共享目录/var/www/share,设置权限为777</p><p>5) 编辑/etc/exports文件<br><code>/var/www/share 172.18.0.*(rw,sync)</code></p><p>6) 导出nfs共享目录<br><code>#exportfs -rv</code></p><p>7) 查看nfs上的共享目录</p><p>#showmount -e IP地址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">showmount -e 172.18.0.1</span><br><span class="line">Export list <span class="keyword">for</span> 172.18.0.1:</span><br><span class="line">/var/www/share 172.18.0.*</span><br></pre></td></tr></table></figure></p><h3 id="启用APP1和APP2两个容器,挂载共享目录"><a href="#启用APP1和APP2两个容器,挂载共享目录" class="headerlink" title="启用APP1和APP2两个容器,挂载共享目录"></a>启用APP1和APP2两个容器,挂载共享目录</h3><p>1) 启动容器APP1,设定地址为172.18.0.111<br>docker run -d –privileged –net cluster –ip 172.18.0.111 –name APP1 php-apache /usr/sbin/init<br>2) 启动容器APP2,设定地址为172.18.0.112<br>docker run -d –privileged –net cluster –ip 172.18.0.112 –name APP2 php-apache /usr/sbin/init<br>3) <code>yum install nfs-utils</code> //进入容器并安装nfs<br>4) #showmount -e 172.18.0.1 //在APP1查看nfs上的共享目录<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">showmount -e 172.18.0.1</span><br><span class="line">Export list <span class="keyword">for</span> 172.18.0.1:</span><br><span class="line">/var/www/share 172.18.0.*</span><br></pre></td></tr></table></figure></p><p>5) 共享目录挂在到本地目录<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">mkdir /var/www/share</span><br><span class="line">mount 172.18.0.1:/var/www/share /var/www/share</span><br></pre></td></tr></table></figure></p><p>6) 在APP1的/var/www/share上读写文件,在nfs上查看</p><p>7) APP2按以上步骤配置</p><h2 id="配置nginx1、APP1实现动静分离"><a href="#配置nginx1、APP1实现动静分离" class="headerlink" title="配置nginx1、APP1实现动静分离"></a>配置nginx1、APP1实现动静分离</h2><h3 id="在APP1上编写PHP脚本,上传资源文件"><a href="#在APP1上编写PHP脚本,上传资源文件" class="headerlink" title="在APP1上编写PHP脚本,上传资源文件"></a>在APP1上编写PHP脚本,上传资源文件</h3><p>1) vim /var/www/index.php //在APP1上编辑php文件<br><figure class="highlight php"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span 
class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta"><?php</span></span><br><span class="line"><span class="function"><span class="keyword">function</span> <span class="title">serverIp</span><span class="params">()</span></span>{ <span class="comment">//获取服务器IP地址</span></span><br><span class="line"> <span class="keyword">if</span>(<span class="keyword">isset</span>($_SERVER)){</span><br><span class="line"> <span class="keyword">if</span>($_SERVER[<span class="string">'SERVER_ADDR'</span>]){</span><br><span class="line"> $server_ip=$_SERVER[<span class="string">'SERVER_ADDR'</span>];</span><br><span class="line"> }<span class="keyword">else</span>{</span><br><span class="line"> $server_ip=$_SERVER[<span class="string">'LOCAL_ADDR'</span>];</span><br><span class="line"> }</span><br><span class="line"> }<span class="keyword">else</span>{</span><br><span class="line"> $server_ip = getenv(<span class="string">'SERVER_ADDR'</span>);</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">return</span> $server_ip;</span><br><span class="line"> }</span><br><span class="line"> <span class="meta">?></span></span><br><span class="line"><!doctype html></span><br><span class="line"><html></span><br><span class="line"><head></span><br><span class="line"><meta charset=<span class="string">"utf-8"</span>></span><br><span class="line"><title>动静分离测试</title></span><br><span class="line"><link rel=<span class="string">"stylesheet"</span> type=<span class="string">"text/css"</span> href=<span class="string">"share/banner.css"</span>></span><br><span class="line"><script type=<span class="string">"text/javascript"</span> src=<span class="string">"share/jquery-1.7.2.min.js"</span>></script></span><br><span class="line"></head></span><br><span class="line"><body></span><br><span class="line"> <div class="banner"></span><br><span class="line"> <ul></span><br><span class="line"> <li><img src=<span class="string">"share/banner_02.jpg"</span> /></li></span><br><span class="line"> <li><img src=<span class="string">"share/banner_01.gif"</span> /></li></span><br><span class="line"> </ul></span><br><span class="line"> </div></span><br><span class="line"> <div class="main_list"></span><br><span class="line"> <ul></span><br><span class="line"> <li><a href=<span class="string">"#"</span>>动静分离测试...</a></li></span><br><span class="line"> <li><a href=<span class="string">"#"</span>>动静分离测试...</a></li></span><br><span class="line"> </ul> </span><br><span class="line"> </div> </span><br><span class="line"> <span><span class="meta"><?php</span> <span class="keyword">echo</span> serverIp(); <span class="meta">?></span></span> </span><br><span class="line"></body></span><br><span class="line"></html></span><br></pre></td></tr></table></figure></p><p>4) 把图片资源文件上传到APP1服务器的 <code>/var/www/share</code>目录</p><p>5) 在宿主机nfs服务器的 /var/www/share目录中检查文件是否存在</p><p>6) 在宿主机使用curl访问<a href="http://172.18.0.111/index.php" target="_blank" rel="noopener">http://172.18.0.111/index.php</a></p><p><img src="https://img-blog.csdnimg.cn/20200430185740896.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" 
alt="在这里插入图片描述"></p><h3 id="配置nginx反向代理,访问APP1"><a href="#配置nginx反向代理,访问APP1" class="headerlink" title="配置nginx反向代理,访问APP1"></a>配置nginx反向代理,访问APP1</h3><p>1) 启动容器nginx1,设定地址为172.18.0.11,把80端口映射到宿主机8080<br>docker run -d –privileged –net cluster –ip 172.18.0.11 -p 8080:80 –name nginx1 nginx-keep /usr/sbin/init<br>2) 在nginx1上编辑/etc/nginx/nginx.conf,重启nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.111;</span><br><span class="line"> }</span><br></pre></td></tr></table></figure></p><p>3) 在主机使用浏览器访问<a href="http://192.168.*.100/index.php" target="_blank" rel="noopener">http://192.168.*.100/index.php</a> </p><p>这里肯定显示不了图片 因为网站的根目录为<code>/var/www/html</code>而share目录在<code>/var/www</code>下</p><p><img src="https://img-blog.csdnimg.cn/20200430185523405.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置nginx反向代理,宿主机nginx,支持动静分离"><a href="#配置nginx反向代理,宿主机nginx,支持动静分离" class="headerlink" title="配置nginx反向代理,宿主机nginx,支持动静分离"></a>配置nginx反向代理,宿主机nginx,支持动静分离</h3><p>1) 在nfs宿主机编辑/etc/nginx/conf.d/ default.conf,启用nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name file.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> root /var/www;</span><br><span class="line"> index index.html index.htm;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在nginx1上编辑/etc/nginx/nginx.conf,重启nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.111;</span><br><span class="line"> }</span><br><span class="line"> location /share {</span><br><span class="line"> proxy_pass http://172.18.0.1/share;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>3) 在主机使用浏览器访问<a href="http://192.168.*.100/index.php" target="_blank" rel="noopener">http://192.168.*.100/index.php</a><br><img 
src="https://img-blog.csdnimg.cn/20200430185822475.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离"><a href="#配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离" class="headerlink" title="配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离"></a>配置nginx1、APP1、APP2、宿主机nfs和nginx,支持负载均衡动静分离</h3><p>1) 仿照步骤1,在APP2上编写PHP脚本,上传资源文件<br>3) 在nginx1上编辑/etc/nginx/nginx.conf,重启nginx服务<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://APP;</span><br><span class="line"> }</span><br><span class="line"> location /share {</span><br><span class="line"> proxy_pass http://172.18.0.1/share;</span><br><span class="line"> }</span><br><span class="line">upstream APP {</span><br><span class="line"> server 172.18.0.111;</span><br><span class="line"> server 172.18.0.112;</span><br><span class="line">}</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>4) 在主机使用浏览器访问<a href="http://192.168.*.100/index.php" target="_blank" rel="noopener">http://192.168.*.100/index.php</a><br><img src="https://img-blog.csdnimg.cn/20200430185827671.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="NFS介绍"><a href="#NFS介绍" class="headerlink" title="NFS介绍"></a>NFS介绍</h2><p>NFS 是Network File System的缩写,即网络文件系统。一种使用于分散式文件系统的协定,由Sun公司
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>Docker 三剑客之Machine,Compose,Swarm</title>
<link href="https://plutoacharon.github.io/2020/05/12/Docker-%E4%B8%89%E5%89%91%E5%AE%A2%E4%B9%8BMachine%EF%BC%8CCompose%EF%BC%8CSwarm/"/>
<id>https://plutoacharon.github.io/2020/05/12/Docker-三剑客之Machine,Compose,Swarm/</id>
<published>2020-05-12T14:40:08.000Z</published>
<updated>2020-05-12T14:40:22.534Z</updated>
<content type="html"><![CDATA[<h1 id="Docker三剑客"><a href="#Docker三剑客" class="headerlink" title="Docker三剑客"></a>Docker三剑客</h1><p>为了把容器化技术的优点发挥到极致,docker公司先后推出了三大技术</p><ul><li>docker-machine</li><li>docker-compose</li><li>docker-swarm<br>它们可以说是几乎实现了容器化技术中所有可能需要的底层技术手段。<br><img src="https://img-blog.csdnimg.cn/20200426145546753.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70#pic_center" alt="在这里插入图片描述"><blockquote><p>图源: <a href="https://xiaoxiami.gitbook.io/docker/docker-ji-qun" target="_blank" rel="noopener">https://xiaoxiami.gitbook.io/docker/docker-ji-qun</a></p></blockquote></li><li>docker-machine - 提供容器服务</li><li>docker-compose - 提供脚本执行服务,不用在像以前把容器的启动命令写的非常的长,用compose编写脚本就能简化容器的启动</li><li>几条简单指令就可以创建一个docker集群,最终实现分布式的服务<h2 id="Docker-三剑客之-Machine"><a href="#Docker-三剑客之-Machine" class="headerlink" title="Docker 三剑客之 Machine"></a>Docker 三剑客之 Machine</h2>Docker Machine 是 Docker 官方三剑客项目之一 ,负责使用 Docker 容器的第一步 :在多<br>种平台上快速安装和维护 Docker 运行环境 。 它支持多种平 台 ,让用户可以在很短时间内在<br>本地或云环境中搭建一套 Docker 主机集群。</li></ul><h3 id="Machine-简介"><a href="#Machine-简介" class="headerlink" title="Machine 简介"></a>Machine 简介</h3><p>Machine 项目是 Docker 官方的开源项目 ,负责实现对 Docker 运行环境进行安装和管理,特别在管理多个 Docker 环境时,使用 Machine 要比手动管理高效得多。</p><p>Machine 的定位是“在本地或者云环境中创建 Docker 主机” </p><p>其代码在<code>https://github.com/docker/machine</code> 上开源,遵循 Apache-2.0 许可</p><p>Machine 项目主要由 Go 语言编写,用户可以在本地任意指定由 Machine 管理的 Docker主机,并对其进行操作。</p><p>其基本功能包括:</p><ul><li>在指定节点或平台上安装 Docker 引擎,配置其为可使用的 Docker 环境;</li><li>集中管理(包括启动 、查看等)所安装 的Docker 环境。</li></ul><p>Machine 连接不同类型的操作平台是通过对应驱动来实现 的,目前已经集成了包括AWS 、 IBM 、 Google ,以及 OpenStack 、 VirtualBox 、 vSphere 等多种云平台的支持。</p><h3 id="安装"><a href="#安装" class="headerlink" title="安装"></a>安装</h3><p>在 Linux 平台上的安装十分简单,推荐从官方 Release 库<code>https://github.corn/docker/machine/releases</code> 直接下载编译好的二进制文件即可</p><p>在 Linux 64 位系统上直接下载对应的二进制包<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">$ sudo curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine- <span class="string">' uname -s'</span>-<span class="string">'uname -m '</span> > docker-machine </span><br><span class="line">$ sudo mv docker-machine /usr/ <span class="built_in">local</span>/bin/docker-machine </span><br><span class="line">$ sudo chmod +x /usr/<span class="built_in">local</span>/bin/docker-machine</span><br><span class="line">安装完成后,查看版本信息,验证运行正常:</span><br><span class="line">$ docker-machine -v </span><br><span class="line">docker-machine version 0.13.0</span><br></pre></td></tr></table></figure></p><p>当要对多个 Docker 主机环境进行安装、配置和管理时,采用 Docker Machine 的方式将远比手动方式<br>快捷。 不仅提高了操作速度,更通过批量统一的管理减少了出错的可能。 尤其在大规模集群和云平台环境中推荐使用</p><h2 id="Docker-三剑客之-Compose"><a href="#Docker-三剑客之-Compose" class="headerlink" title="Docker 三剑客之 Compose"></a>Docker 三剑客之 Compose</h2><p>编排( Orchestration )功能,是复杂系统是否具有灵活可操作性的关键。 特别在 Docker应用场景中,编排意味着用户可以灵活地对各种容器资源实现定义和管理。</p><p>Compose 作为 Docker 官方编排工具,其重要性不言而喻,它可以让用户通过编写一个简单的模板文件,快速地创建和管理基于 Docker 容器的应用集群。</p><h3 id="Compose-简介"><a href="#Compose-简介" class="headerlink" title="Compose 简介"></a>Compose 简介</h3><p>Compose 项目是 Docker 官方的开源项目,负责实现对基于 Docker 容器的多应用服务的快速编排。 从功能上看,跟 Open Stack 中的 Heat 十分类似。 其代码目前在 <code>https://github .com/docker/compose</code> 巳上开源 
。</p><p>Compose 定位是“定义和运行多个 Docker 容器的应用”,其前身是开源项目<code>Fig</code> ,目前仍然兼容 Fig 格式的模板文件。</p><p>在日常工作中,经常会碰到需要多个容器相互配合来完成某项任务的情况。 例如要实现一个 Web 项目,除了 Web 服务容器本身,往往还需要再加上后端的数据库服务容器,甚至还包括前端的负载均衡容器等。</p><p>Compose 恰好满足了这样的需求。 它允许用户通过一个单独的 <code>docker-compose.yml</code>模板文件( YAML 格式)来定义一组相关联的应用容器为一个服务樵( stack ) </p><p>Compose 中有几个重要的概念:</p><ul><li><p>任务( task ) : 一个容器被称为一个任务。 任务拥有独一无二的 ID ,在同一个服务中的多个任务序号依次递增 。</p></li><li><p>服务( service ):某个相同应用镜像的容器副本集合,一个服务可以横向扩展为多个容器实例 。</p></li><li><p>服务枝 ( stack ) :由 多个服务组成 ,相互配合完成特定业务 , 如 Web 应用服务、数据<br>库服务共同构成 Web 服务钱 ,一般由一个 docker-cornpose.yml 文件定义。</p></li></ul><p>Compose 的默认管理对象是服务钱,通过子命令对栈中的多个服务进行便捷的生命周期管理。</p><p>Compose 项目由 Python 编写 ,实现上调用了 Docker 服务提供的 API 来对容器进行管理。</p><p>因此,只要所操作的平台支持 Docker API,就可以在其上利用 Compose 来进行编排管理。</p><h3 id="Compose安装"><a href="#Compose安装" class="headerlink" title="Compose安装"></a>Compose安装</h3><p>二进制包安装</p><p>这些发布的二进制包可以在<code>https://github.com/docker/compose/releases</code> 页面找到 </p><p>将这些二进制文件下载后直接放到执行路径下,并添加执行权限即可。<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">$ sudo curl -L https : //github.com/docker/compose/releases/download/1.19.0/docker-compose- ’ uname -s ’- ’ uname -m’ > / usr/ <span class="built_in">local</span> / bin/ docker-compose </span><br><span class="line">$ sudo chmod a+x /usr/<span class="built_in">local</span>/bin/docker-cornpose</span><br><span class="line">可以使用 docker-compose version 命令来查看版本信息,以测试是否安装成功:</span><br><span class="line"></span><br><span class="line">$ docker-compose version</span><br><span class="line">docker compose version 1.19.0</span><br><span class="line">docker-py version : 2.7.0 </span><br><span class="line">CPython version : 2.7.12 </span><br><span class="line">OpenSSL version : OpenSSL l.0.2g</span><br></pre></td></tr></table></figure></p><p>在 Docker 三剑客中, Compose 掌管运行时的编排能力,位置十分关键。 使用 Compose模板文件,用户可以编写包括若干服务的一个模板文件快速启动服务栈;如果分发给他人,也可快速创建一套相同的服务栈。</p><h2 id="Docker-三剑客之-Swarm"><a href="#Docker-三剑客之-Swarm" class="headerlink" title="Docker 三剑客之 Swarm"></a>Docker 三剑客之 Swarm</h2><p>Docker Swarm 是 Docker 官方三剑客项目之一,提供 Docker 容器集群服务,是 Docker官方对容器云生态进行支持的核心方案。 使用它,用户可以将多个 Docker 主机抽象为大规模的虚拟 Docker 服务,快速打造一套容器云平台</p><h3 id="Swarm-简介"><a href="#Swarm-简介" class="headerlink" title="Swarm 简介"></a>Swarm 简介</h3><p>Docker Swarm 是 Docker 公司推出的官方容器集群平台 , 基于 Go 语言实现,代码开源在 <code>https:// github.com/ docker/swarm</code> </p><p>目前,包括 Rackspace 等平台都采用了 Swarm ,用户也很容易在 AWS 等公有云平台使用 Swarm 。</p><p>Swarm 的前身是 Beam 项目和 libswarm 项目,首个正式版本( Swarm Vl )在 2014 年 12 月初发布 。 为了提高可扩展性, 2016 年 2 月对架构进行重新设计,推出了 V2 版本,支持超过 lK 个节点 。最新的 Docker Engine ( 1.12 后)已经集成SwarmKit 内嵌了对 Swarm 模式的支持。</p><p>作为容器集群管理器, Swarm 最大的优势之一就是原生支持 Docker API ,给用户使用带来极大的便利 。 各种基于标准 A凹的工具比如 Compose 、 Docker SDK 、各种管理软件, 甚至Docker 本身等都可以很容易的与 Swarm 进行集成。 这大大方便了用户将原先基于单节点的系统移植到 Swarm 上。 同时 Swarm 内置了对 Docker 网络插件的支持,用户可以很容易地部署跨主机的容器集群服务。</p><p>Swarm 也采用了典型的“主从”结构<br><img src="https://img-blog.csdnimg.cn/20200426163912544.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="“主从”结构"></p><p>通过 Raft 协议来在多个管理节点( Manager )中实现共识。 工作节点( Worker )上运行 agent 接受管理节点的统一管理和任<br>务分配。 
用户提交服务请求只需要发给管理节点即可,管理节点会按照调度策略在集群中分配节点来运行服务相关的任务</p><p>在 Swarm V2 中,集群中会自动通过 Raft 协议分布式选举出 Manager 节点,无须额外的发现服务支持,避免了单点瓶颈。 同时, V2 中内置了基于 DNS 的负载均衡和对外部负载均衡机制的集成支持。</p><h3 id="Swarm-基本概念"><a href="#Swarm-基本概念" class="headerlink" title="Swarm 基本概念"></a>Swarm 基本概念</h3><p>Swarm 在 Docker 基础上扩展了支持多节点的能力,同时兼容了大部分的 Docker 操作。Swarm 中以集群为单位进行管理,支持服务层面的操作。</p><h4 id="1-Swarm-集群"><a href="#1-Swarm-集群" class="headerlink" title="1. Swarm 集群"></a>1. Swarm 集群</h4><p>Swarm 集群( Cluster )为一组被统一管理起来的 Docker 主机。 集群是 Swarm 所管理的对象。 这些主机通过 Docker 引擎的 Swarm 模式相互沟通,其中部分主机可能作为管理节点(manager)响应外部的管理请求,其他主机作为工作节点( worker )来实际运行 Docker 容器。当然,同一个主机也可以即作为管理节点,同时作为工作节点 。</p><p>当用户使用 Swarm 集群时,首先定义一个服务(指定状态、复制个数、网络、存储 、 暴露端- 等),然后通过管理节点发出启动服务的指令,管理节点随后会按照指定的服务规则进行调度,在集群中启动起来整个服务,并确保它正常运行。</p><h4 id="2-节点"><a href="#2-节点" class="headerlink" title="2. 节点"></a>2. 节点</h4><p>节点(Node )是 Swarm 集群的最小资源单位。 每个节点实际上都是一台 Docker 主机。<br>Swarm 集群中节点分为两种:</p><ul><li>管理节点( manager node ): 负责响应外部对集群的操作请求,并维持集群中资源,分发任务给工作节点 。 同时,多个管理节点之间通过 Raft 协议构成共识。 一般推荐每个集群设置 5 个或 7 个管理节点;</li><li>工作节点( worker node ):负责执行管理节点安排的具体任务。 默认情况下,管理节点自身也同时是工作节点 。 每个工作节点上运行代理( agent )来汇报任务完成情况。用户可以通过 docker node promote 命令来提升一个工作节点为管理节点;或者通过docker node demote 命令来将一个管理节点降级为工作节点。<h4 id="3-服务"><a href="#3-服务" class="headerlink" title="3. 服务"></a>3. 服务</h4>服务( Service)是 Docker 支持复杂多容器协作场景的利器。一个服务可以由若干个任务组成,每个任务为某个具体的应用。 服务还包括对应的存储 、 网络 、 端- 映射、副本个数 、 访问配置 、 升级配置等附加参数。一般来说,服务需要面向特定的场景,例如一个典型的 Web 服务可能包括前端应用 、 后<br>端应用,以及数据库等。 这些应用都属于该服务的管理范畴。</li></ul><p>Swarm 集群中服务类型也分为两种(可以通过-mode 指定) :</p><ul><li>复制服务( replicated services )模式 : 默认模式,每个任务在集群中会存在若干副本,<br>这些副本会被管理节点按照调度策略分发到集群中的工作节点上。 此模式下可以使<br>用-replicas 参数设置副本数量 ;</li><li>全局服务( global services )模式 : 调度器将在每个可用节点都执行一个相同的任务。<br>该模式适合运行节点的检查,如监控应用等。<h4 id="4-任务"><a href="#4-任务" class="headerlink" title="4. 任务"></a>4. 任务</h4>任务是 Swarm 集群中最小的调度单位,即一个指定的应用容器。 例如仅仅运行前端业务的前端容器。 任务从生命周期上将可能处于创建( NEW ) 、 等待( PENDING ) 、 分配( ASSIGNED ) 、 接受( ACCEPTED ) 、 准备( PREPARING )、开始( STARTING ) 、 运行 (RUNING) 、 完成(COMPLETE )、失败(FAILED ) 、 关闭(SHUTDOWN) 、 拒绝(PEJECTED ) 、孤立( ORPHANED )等不同状态 。</li></ul><p>Swarm 集群中的管理节点会按照调度要求将任务分配到工作节点上。 例如指定副本为 2时,可能会被分配到两个不同的工作节点上。一旦当某个任务被分配到一个工作节点,将无法被转移到另外的工作节点,即 Swarm 中的任务不支持迁移。</p><h4 id="5-服务的外部访问"><a href="#5-服务的外部访问" class="headerlink" title="5 . 服务的外部访问"></a>5 . 服务的外部访问</h4><p>Swarm 集群中的服务要被集群外部访问,必须要能允许任务的响应端口映射出来。Swarm 中支持入口负载均衡(ingress load balancing )的映射模式。 该模式下,每个服务都会被分配一个公开端口( PublishedPort ),该端口在集群中任意节点上都可以访问到,并被保留给该服务。</p><p>当有请求发送到任意节点的公开端- 时,该节点若并没有实际执行服务相关的容器,则会通过路由机制将请求转发给实际执行了服务容器的工作节点 。</p><p>通过使用 Swarm ,用户可以将若干 Docker 主机节点组成的集群当作一个大的虚拟 Docker 主机使用 。 并且,原先基于单机的Docker 应用,可以无缝地迁移到 Swarm 上来。 通过使用服务, Swarm 集群可以支持多个应用构建的复杂业务,并很容易对其进行升级等操作 。</p><p>在生产环境中, Swarm 的管理节点要考虑高可用性和安全保护,一方面多个管理节点应该分配到不同的容灾区域,另一方面服务节点应该配合数字证书等手段限制访问 。Swarm 功能已 经被无缝嵌入Docker 1.12+版本中,用户今后可 以 直接使用 Docker命令来完成相关功能的配置,对 Swarm 集群的管理更加简便。</p>]]></content>
<summary type="html">
<h1 id="Docker三剑客"><a href="#Docker三剑客" class="headerlink" title="Docker三剑客"></a>Docker三剑客</h1><p>为了把容器化技术的优点发挥到极致,docker公司先后推出了三大技术</p>
<ul
</summary>
<category term="Docker" scheme="https://plutoacharon.github.io/categories/Docker/"/>
<category term="Dokcer" scheme="https://plutoacharon.github.io/tags/Dokcer/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(八)----Kubernetes1.15.1 部署Prometheus</title>
<link href="https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E5%85%AB-Kubernetes1-15-1-%E9%83%A8%E7%BD%B2Prometheus/"/>
<id>https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-入门到实践-八-Kubernetes1-15-1-部署Prometheus/</id>
<published>2020-05-12T14:39:38.000Z</published>
<updated>2020-05-12T14:39:48.536Z</updated>
<content type="html"><![CDATA[<h2 id="Prometheus介绍"><a href="#Prometheus介绍" class="headerlink" title="Prometheus介绍"></a>Prometheus介绍</h2><p>随着容器技术的迅速发展,Kubernetes 已然成为大家追捧的容器集群管理系统。Prometheus 作为生态圈 Cloud Native Computing Foundation(简称:CNCF)中的重要一员,其活跃度仅次于 Kubernetes, 现已广泛用于 Kubernetes 集群的监控系统中。</p><p>本文将简要介绍 Prometheus 的组成和相关概念,并实例演示 Prometheus 的安装,配置及使用。</p><h3 id="Prometheus的特点:"><a href="#Prometheus的特点:" class="headerlink" title="Prometheus的特点:"></a>Prometheus的特点:</h3><ul><li>多维度数据模型。</li><li>灵活的查询语言。</li><li>不依赖分布式存储,单个服务器节点是自主的。</li><li>通过基于HTTP的pull方式采集时序数据。</li><li>可以通过中间网关进行时序列数据推送。</li><li>通过服务发现或者静态配置来发现目标服务对象。</li><li>支持多种多样的图表和界面展示,比如Grafana等</li></ul><p><strong>官方架构图</strong><br>官方网站:<a href="https://prometheus.io/" target="_blank" rel="noopener">https://prometheus.io/</a><br><img src="https://img-blog.csdnimg.cn/20200425101507273.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200425101711164.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70#pic_center" alt="在这里插入图片描述"></p><p>Prometheus 生态圈中包含了多个组件,其中许多组件是可选的:</p><ul><li>Prometheus Server: 用于收集和存储时间序列数据。</li><li>Client Library: 客户端库,为需要监控的服务生成相应的 metrics 并暴露给 Prometheus server。当 Prometheus server 来 pull 时,直接返回实时状态的 metrics。</li><li>Push Gateway: 主要用于短期的 jobs。由于这类 jobs 存在时间较短,可能在 Prometheus 来 pull 之前就消失了。为此,这次 jobs 可以直接向 Prometheus server 端推送它们的 metrics。这种方式主要用于服务层面的 metrics,对于机器层面的 metrices,需要使用 node exporter。</li><li>Exporters: 用于暴露已有的第三方服务的 metrics 给 Prometheus。</li><li>Alertmanager: 从 Prometheus server 端接收到 alerts 后,会进行去除重复数据,分组,并路由到对收的接受方式,发出报警。常见的接收方式有:电子邮件,pagerduty,OpsGenie, webhook 等一些其他的工具。</li></ul><h3 id="Prometheus的基本原理"><a href="#Prometheus的基本原理" class="headerlink" title="Prometheus的基本原理"></a>Prometheus的基本原理</h3><p>Prometheus的基本原理是通过HTTP协议周期性抓取被监控组件的状态,任意组件只要提供对应的HTTP接口就可以接入监控。不需要任何SDK或者其他的集成过程。这样做非常适合做虚拟化环境监控系统,比如VM、Docker、Kubernetes等。输出被监控组件信息的HTTP接口被叫做exporter 。目前互联网公司常用的组件大部分都有exporter可以直接使用,比如Varnish、Haproxy、Nginx、MySQL、Linux系统信息(包括磁盘、内存、CPU、网络等等)。</p><h2 id="Prometheus部署"><a href="#Prometheus部署" class="headerlink" title="Prometheus部署"></a>Prometheus部署</h2><h3 id="1-修改-grafana-service-yaml-文件"><a href="#1-修改-grafana-service-yaml-文件" class="headerlink" title="1. 修改 grafana-service.yaml 文件"></a>1. 
修改 grafana-service.yaml 文件</h3><p>使用git下载Prometheus项目<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 plugin]<span class="comment"># mkdir prometheus</span></span><br><span class="line">[root@k8s-master01 plugin]<span class="comment"># cd prometheus/</span></span><br><span class="line">[root@k8s-master01 prometheus]<span class="comment"># git clone https://github.com/coreos/kube-prometheus.git</span></span><br><span class="line">正克隆到 <span class="string">'kube-prometheus'</span>...</span><br><span class="line">remote: Enumerating objects: 4, <span class="keyword">done</span>.</span><br><span class="line">remote: Counting objects: 100% (4/4), <span class="keyword">done</span>.</span><br><span class="line">remote: Compressing objects: 100% (4/4), <span class="keyword">done</span>.</span><br><span class="line">remote: Total 8171 (delta 0), reused 1 (delta 0), pack-reused 8167</span><br><span class="line">接收对象中: 100% (8171/8171), 4.56 MiB | 57.00 KiB/s, <span class="keyword">done</span>.</span><br><span class="line">处理 delta 中: 100% (4936/4936), <span class="keyword">done</span>.</span><br><span class="line">[root@k8s-master01 prometheus]<span class="comment"># cd kube-prometheus/manifests/</span></span><br><span class="line">[root@k8s-master01 manifests]<span class="comment"># vim grafana-service.yaml</span></span><br></pre></td></tr></table></figure></p><p>使用 nodepode 方式访问 grafana:<br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">grafana</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">grafana</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">monitoring</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">http</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">3000</span></span><br><span 
class="line"><span class="attr"> targetPort:</span> <span class="string">http</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">30100</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">grafana</span></span><br></pre></td></tr></table></figure></p><h3 id="2-修改-修改-prometheus-service-yaml"><a href="#2-修改-修改-prometheus-service-yaml" class="headerlink" title="2. 修改 修改 prometheus-service.yaml"></a>2. 修改 修改 prometheus-service.yaml</h3><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> prometheus:</span> <span class="string">k8s</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">prometheus-k8s</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">monitoring</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">9090</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">30200</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">prometheus</span></span><br><span class="line"><span class="attr"> prometheus:</span> <span class="string">k8s</span></span><br><span class="line"><span class="attr"> sessionAffinity:</span> <span class="string">ClientIP</span></span><br></pre></td></tr></table></figure><h3 id="3-修改alertmanager-service-yaml"><a href="#3-修改alertmanager-service-yaml" class="headerlink" title="3. 修改alertmanager-service.yaml"></a>3. 
修改alertmanager-service.yaml</h3><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> alertmanager:</span> <span class="string">main</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">alertmanager-main</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">monitoring</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">9093</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="string">web</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">30300</span> <span class="comment"># 添加</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> alertmanager:</span> <span class="string">main</span></span><br><span class="line"><span class="attr"> app:</span> <span class="string">alertmanager</span></span><br><span class="line"><span class="attr"> sessionAffinity:</span> <span class="string">ClientIP</span></span><br></pre></td></tr></table></figure><h3 id="4-kubectl-apply-部署"><a href="#4-kubectl-apply-部署" class="headerlink" title="4. kubectl apply 部署"></a>4. 
kubectl apply 部署</h3><p>进入目录<code>kube-prometheus</code>执行<code>kubectl apply -f manifests/</code><br>报错<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">unable to recognize <span class="string">"../manifests/alertmanager-alertmanager.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"Alertmanager"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/alertmanager-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/grafana-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/kube-state-metrics-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/node-exporter-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-operator-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-prometheus.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"Prometheus"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-rules.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"PrometheusRule"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitor.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span 
class="string">"../manifests/prometheus-serviceMonitorApiserver.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorCoreDNS.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorKubeControllerManager.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorKubeScheduler.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br><span class="line">unable to recognize <span class="string">"../manifests/prometheus-serviceMonitorKubelet.yaml"</span>: no matches <span class="keyword">for</span> kind <span class="string">"ServiceMonitor"</span> <span class="keyword">in</span> version <span class="string">"monitoring.coreos.com/v1"</span></span><br></pre></td></tr></table></figure></p><p>网上查询得知:<a href="https://github.com/coreos/prometheus-operator/issues/1866" target="_blank" rel="noopener">As the QuickStart mentions, there is a race in Kubernetes that the CRD creation finished but the API is not actually available. 
You just have to run the command once again.</a> 需要运行多次</p><p>创建成功后查看<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 manifests]<span class="comment"># kubectl get svc -n monitoring</span></span><br><span class="line">NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</span><br><span class="line">alertmanager-main NodePort 10.102.129.38 <none> 9093:30300/TCP 15s</span><br><span class="line">alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 8s</span><br><span class="line">grafana NodePort 10.103.207.222 <none> 3000:30100/TCP 14s</span><br><span class="line">kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 14s</span><br><span class="line">node-exporter ClusterIP None <none> 9100/TCP 14s</span><br><span class="line">prometheus-adapter ClusterIP 10.104.146.228 <none> 443/TCP 13s</span><br><span class="line">prometheus-k8s NodePort 10.100.247.74 <none> 9090:30200/TCP 12s</span><br><span class="line">prometheus-operator ClusterIP None <none> 8080/TCP 15s</span><br><span class="line">[root@k8s-master01 manifests]<span class="comment"># kubectl get pods -n monitoring</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">alertmanager-main-0 2/2 Running 1 111s</span><br><span class="line">grafana-7dc5f8f9f6-r9w78 1/1 Running 0 117s</span><br><span class="line">kube-state-metrics-5cbd67455c-q5hlh 4/4 Running 0 97s</span><br><span class="line">node-exporter-5bjhk 2/2 Running 0 116s</span><br><span class="line">node-exporter-n84tr 2/2 Running 0 115s</span><br><span class="line">node-exporter-xbz84 2/2 Running 0 115s</span><br><span class="line">prometheus-adapter-668748ddbd-c9ws6 1/1 Running 0 115s</span><br><span class="line">prometheus-k8s-0 3/3 Running 1 101s</span><br><span class="line">prometheus-k8s-1 3/3 Running 1 101s</span><br><span class="line">prometheus-operator-7447bf4dcb-jfmsn 1/1 Running 0 117s</span><br><span class="line">[root@k8s-master01 manifests]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="访问-prometheusprometheus"><a href="#访问-prometheusprometheus" class="headerlink" title="访问 prometheusprometheus"></a>访问 prometheusprometheus</h3><p>对应的 nodeport 端口为 30200,访问<a href="http://MasterIP:30200" target="_blank" rel="noopener">http://MasterIP:30200</a><br><img src="https://img-blog.csdnimg.cn/20200425121755673.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>通过访问<a href="http://MasterIP:30200/target可以看到" target="_blank" rel="noopener">http://MasterIP:30200/target可以看到</a> prometheus 已经成功连接上了 k8s 的 apiserver<br>节点全部健康<br><img 
src="https://img-blog.csdnimg.cn/20200425122703522.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>prometheus 的 WEB 界面上提供了基本的查询 K8S 集群中每个 POD 的 CPU 使用情况<br><code>sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )</code><br><img src="https://img-blog.csdnimg.cn/20200425130559647.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>上述的查询有出现数据,说明 node-exporter 往 prometheus 中写入数据正常</p><h3 id="访问-grafana查看"><a href="#访问-grafana查看" class="headerlink" title="访问 grafana查看"></a>访问 grafana查看</h3><p>grafana 服务暴露的端口号:<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">kubectl getservice-n monitoring | grep grafana</span><br><span class="line">grafana NodePort 10.107.56.143 <none> 3000:30100/TCP</span><br></pre></td></tr></table></figure></p><p>浏览器访问<a href="http://MasterIP:30100" target="_blank" rel="noopener">http://MasterIP:30100</a><br>用户名密码默认 admin/admin<br><img src="https://img-blog.csdnimg.cn/20200425130803860.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200425131004191.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>查看Kubernetes API server的数据<br><img src="https://img-blog.csdnimg.cn/20200425131017865.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="Prometheus介绍"><a href="#Prometheus介绍" class="headerlink" title="Prometheus介绍"></a>Prometheus介绍</h2><p>随着容器技术的迅速发展,Kubernetes 已然成为大家追
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(七)----部署Helm 2.13.1</title>
<link href="https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E4%B8%83-%E9%83%A8%E7%BD%B2Helm-2-13-1/"/>
<id>https://plutoacharon.github.io/2020/05/12/Kubernetes-K8s-入门到实践-七-部署Helm-2-13-1/</id>
<published>2020-05-12T14:38:55.000Z</published>
<updated>2020-05-12T14:39:19.644Z</updated>
<content type="html"><![CDATA[<h2 id="什么是-Helm"><a href="#什么是-Helm" class="headerlink" title="什么是 Helm"></a>什么是 Helm</h2><p><a href="https://helm.sh/" target="_blank" rel="noopener">Helm官方网站</a>:The package manager for Kubernetes</p><p>在没使用 helm 之前,向 kubernetes 部署应用,我们要依次部署 deployment、svc 等,步骤较繁琐。况且随着很多项目微服务化,复杂的应用在容器中部署以及管理显得较为复杂。</p><p><code>Helm</code> 通过打包的方式,支持发布的版本管理和控制,很大程度上简化了 Kubernetes 应用的部署和管理Helm 本质就是让 K8s 的应用管理(Deployment,Service 等 ) 可配置,能动态生成,通过动态生成 K8s 资源清单文件(deployment.yaml,service.yaml),然后调用 Kubectl 自动执行 K8s 资源部署</p><p>Helm 是官方提供的类似于 YUM 的包管理器,是部署环境的流程封装。</p><p>Helm 有两个重要的概念:<strong>chart 和releasechart</strong> </p><ul><li>chart 是创建一个应用的信息集合,包括各种 Kubernetes 对象的配置模板、参数定义、依赖关系、文档说明等。chart 是应用部署的自包含逻辑单元。可以将 chart 想象成 apt、yum 中的软件安装包</li><li>release 是 chart 的运行实例,代表了一个正在运行的应用。当 chart 被安装到 Kubernetes 集群,就生成一个 release。chart 能够多次安装到同一个集群,每次安装都是一个 release</li></ul><p>Helm 包含两个组件:<strong>Helm 客户端</strong>和 <strong>Tiller 服务器</strong><br><img src="https://img-blog.csdnimg.cn/20200424194822963.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>Helm 客户端负责 chart 和 release 的创建和管理以及和 Tiller 的交互。</p><p>Tiller 服务器运行在 Kubernetes 集群中,它会处理 Helm 客户端的请求,与 Kubernetes API Server 交互</p><h2 id="Helm-2-13-1-部署"><a href="#Helm-2-13-1-部署" class="headerlink" title="Helm 2.13. 1 部署"></a>Helm 2.13. 1 部署</h2><h3 id="1-下载安装包"><a href="#1-下载安装包" class="headerlink" title="1. 下载安装包"></a>1. 下载安装包</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz</span><br><span class="line">tar -zxvf helm-v2.13.1-linux-amd64.tar.gz</span><br><span class="line"><span class="built_in">cd</span> linux-amd64/</span><br><span class="line">cp helm /usr/<span class="built_in">local</span>/bin/</span><br><span class="line">chmod a+x /usr/<span class="built_in">local</span>/bin/helm</span><br></pre></td></tr></table></figure><h3 id="2-创建-rbac-config-yaml-文件"><a href="#2-创建-rbac-config-yaml-文件" class="headerlink" title="2. 创建 rbac-config.yaml 文件"></a>2. 
创建 rbac-config.yaml 文件</h3><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">tiller</span> </span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">rbac.authorization.k8s.io/v1beta1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ClusterRoleBinding</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">tiller</span></span><br><span class="line"><span class="attr">roleRef:</span> </span><br><span class="line"><span class="attr"> apiGroup:</span> <span class="string">rbac.authorization.k8s.io</span> </span><br><span class="line"><span class="attr"> kind:</span> <span class="string">ClusterRole</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">cluster-admin</span></span><br><span class="line"><span class="attr">subjects:</span> </span><br><span class="line"><span class="attr"> - kind:</span> <span class="string">ServiceAccount</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">tiller</span> </span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br></pre></td></tr></table></figure><p>将yaml文件部署下去后,使用<code>helm init --service-account tiller --skip-refresh</code>命令初始化Heml</p><blockquote><p>如果下载镜像失败 需要自己下载镜像导入到Docker中(三台节点)<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span 
class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 helm]<span class="comment"># kubectl apply -f rbac-config.yaml </span></span><br><span class="line">serviceaccount/tiller unchanged</span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/tiller created</span><br><span class="line">[root@k8s-master01 helm]<span class="comment"># docker load -i helm-tiller.tar </span></span><br><span class="line">3fc64803ca2d: Loading layer [==================================================>] 4.463MB/4.463MB</span><br><span class="line">79395a173ae6: Loading layer [==================================================>] 6.006MB/6.006MB</span><br><span class="line">c33cd2d4c63e: Loading layer [==================================================>] 37.16MB/37.16MB</span><br><span class="line">d727bd750bf2: Loading layer [==================================================>] 36.89MB/36.89MB</span><br><span class="line">Loaded image: gcr.io/kubernetes-helm/tiller:v2.13.1</span><br><span class="line">[root@k8s-master01 helm]<span class="comment"># helm init --service-account tiller --skip-refresh</span></span><br><span class="line">Creating /root/.helm </span><br><span class="line">Creating /root/.helm/repository </span><br><span class="line">Creating /root/.helm/repository/cache </span><br><span class="line">Creating /root/.helm/repository/<span class="built_in">local</span> </span><br><span class="line">Creating /root/.helm/plugins </span><br><span class="line">Creating /root/.helm/starters </span><br><span class="line">Creating /root/.helm/cache/archive </span><br><span class="line">Creating /root/.helm/repository/repositories.yaml </span><br><span class="line">Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com </span><br><span class="line">Adding <span class="built_in">local</span> repo with URL: http://127.0.0.1:8879/charts </span><br><span class="line"><span class="variable">$HELM_HOME</span> has been configured at /root/.helm.</span><br><span class="line"></span><br><span class="line">Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.</span><br><span class="line"></span><br><span class="line">Please note: by default, Tiller is deployed with an insecure <span class="string">'allow unauthenticated users'</span> policy.</span><br><span class="line">To prevent this, run `helm init` with the --tiller-tls-verify flag.</span><br><span class="line">For more information on securing your installation see: https://docs.helm.sh/using_helm/<span class="comment">#securing-your-helm-installation</span></span><br><span class="line">Happy Helming!</span><br><span class="line">root@k8s-master01 helm]<span class="comment"># helm version</span></span><br><span class="line">Client: &version.Version{SemVer:<span class="string">"v2.13.1"</span>, GitCommit:<span class="string">"618447cbf203d147601b4b9bd7f8c37a5d39fbb4"</span>, GitTreeState:<span class="string">"clean"</span>}</span><br><span class="line">Server: &version.Version{SemVer:<span class="string">"v2.13.1"</span>, GitCommit:<span class="string">"618447cbf203d147601b4b9bd7f8c37a5d39fbb4"</span>, GitTreeState:<span class="string">"clean"</span>}</span><br><span class="line">[root@k8s-master01 helm]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p></blockquote>]]></content>
<summary type="html">
<h2 id="什么是-Helm"><a href="#什么是-Helm" class="headerlink" title="什么是 Helm"></a>什么是 Helm</h2><p><a href="https://helm.sh/" target="_blank" rel
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(七)---- 基于Docker配置KeepAlive-LVS负载均衡</title>
<link href="https://plutoacharon.github.io/2020/05/05/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%B8%83%EF%BC%89-%E5%9F%BA%E4%BA%8EDocker%E9%85%8D%E7%BD%AEKeepAlive-LVS%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<id>https://plutoacharon.github.io/2020/05/05/HA高可用与负载均衡入门到实战(七)-基于Docker配置KeepAlive-LVS负载均衡/</id>
<published>2020-05-05T13:40:13.000Z</published>
<updated>2020-05-05T13:40:44.959Z</updated>
<content type="html"><![CDATA[<h2 id="实验要求"><a href="#实验要求" class="headerlink" title="实验要求"></a>实验要求</h2><p>1、 安装配置LVS负载均衡<br>2、 安装配置LVS高可用负载均衡</p><p>拓扑图:<br><img src="https://img-blog.csdnimg.cn/20200423164904605.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="LVS介绍"><a href="#LVS介绍" class="headerlink" title="LVS介绍"></a>LVS介绍</h2><h3 id="负载均衡工作模式"><a href="#负载均衡工作模式" class="headerlink" title="负载均衡工作模式"></a>负载均衡工作模式</h3><h4 id="1-NAT模式"><a href="#1-NAT模式" class="headerlink" title="1. NAT模式"></a>1. NAT模式</h4><p><code>Virtualserver via Network address translation(VS/NAT)</code> 这个是通过网络地址转换的方法来实现调度的。</p><p>首先调度器(LB)接收到客户的请求数据包时(请求的目的IP为VIP),根据调度算法决定将请求发送给哪个后端的真实服务器(RS)。然后调度就把客户端发送的请求数据包的目标IP地址及端口改成后端真实服务器的IP地址(RIP),这样真实服务器(RS)就能够接收到客户的请求数据包了。真实服务器响应完请求后,查看默认路由(NAT模式下我们需要把RS的默认路由设置为LB服务器。)把响应后的数据包发送给LB,LB再接收到响应包后,把包的源地址改成虚拟地址(VIP)然后发送回给客户端。 </p><p><strong>调度过程IP包详细图:</strong><br><img src="https://img-blog.csdnimg.cn/20200423165104167.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br> <strong>原理图简述:</strong> </p><ol><li><p>客户端请求数据,目标IP为VIP </p></li><li><p>请求数据到达LB服务器,LB根据调度算法将目的地址修改为RIP地址及对应端口(此RIP地址是根据调度算法得出的。)并在连接HASH表中记录下这个连接。</p></li><li>数据包从LB服务器到达RS服务器webserver,然后webserver进行响应。Webserver的网关必须是LB,然后将数据返回给LB服务器。</li><li>收到RS的返回后的数据,根据连接HASH表修改源地址VIP&目标地址CIP,及对应端口80.然后数据就从LB出发到达客户端。</li><li><p>客户端收到的就只能看到VIP\DIP信息。</p><p><strong>NAT模式优缺点:</strong> </p></li></ol><ul><li>NAT技术将请求的报文和响应的报文都需要通过LB进行地址改写,因此网站访问量比较大的时候LB负载均衡调度器有比较大的瓶颈,一般要求最多只能10-20台节点</li><li>只需要在LB上配置一个公网IP地址就可以</li><li>每台内部的节点服务器的网关地址必须是调度器LB的内网地址</li><li>NAT模式支持对IP地址和端口进行转换。即用户请求的端口和真实服务器的端口可以不一致</li></ul><p><img src="https://img-blog.csdnimg.cn/20200423165440179.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><ol><li><p>客户端将请求发往前端的负载均衡器,请求报文源地址是CIP(客户端IP),后面统称为CIP),目标地址为VIP(负载均衡器前端地址,后面统称为VIP)</p></li><li><p>负载均衡器收到报文后,发现请求的是在规则里面存在的地址,那么它将客户端请求报文的目标地址改为了后端服务器的RIP地址并将报文根据算法发送出去</p></li><li><p>报文送到Real Server后,由于报文的目标地址是自己,所以会响应该请求,并将响应报文返还给LVS。</p></li><li><p>然后lvs将此报文的源地址修改为本机并发送给客户端。</p></li></ol><p><strong>优点:</strong> 集群中的物理服务器可以使用任何支持TCP/IP操作系统,只有负载均衡器需要一个合法的IP地址。<br><strong>缺点:</strong> 扩展性有限。当服务器节点(普通PC服务器)增长过多时,负载均衡器将成为整个系统的瓶颈,因为所有的请求包和应答包的流向都经过负载均衡器。当服务器节点过多时,大量的数据包都交汇在负载均衡器那,速度就会变慢</p><h4 id="2-TUN-隧道-模式"><a href="#2-TUN-隧道-模式" class="headerlink" title="2. TUN(隧道)模式"></a>2. 
TUN(隧道)模式</h4><p>virtual server via ip tunneling模式:采用NAT模式时,由于请求和响应的报文必须通过调度器地址重写,当客户请求越来越多时,调度器处理能力将成为瓶颈。为了解决这个问题,调度器把请求的报文通过IP隧道转发到真实的服务器。真实的服务器将响应处理后的数据直接返回给客户端。这样调度器就只处理请求入站报文,由于一般网络服务应答数据比请求报文大很多,采用VS/TUN模式后,集群系统的最大吞吐量可以提高10倍。 VS/TUN的工作流程图如下所示,它和NAT模式不同的是,它在LB和RS之间的传输不用改写IP地址。而是把客户请求包封装在一个IP tunnel里面,然后发送给RS节点服务器,节点服务器接收到之后解开IP tunnel后,进行响应处理。并且直接把包通过自己的外网地址发送给客户不用经过LB服务器。</p><p><strong>Tunnel原理流程图:</strong><br><img src="https://img-blog.csdnimg.cn/20200423165704792.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>原理图过程简述:</strong> </p><ol><li>客户请求数据包,目标地址VIP发送到LB上。</li><li>LB接收到客户请求包,进行IP Tunnel封装。即在原有的包头加上IP Tunnel的包头。然后发送出去。 </li><li>RS节点服务器根据IP Tunnel包头信息(此时就又一种逻辑上的隐形隧道,只有LB和RS之间懂)收到请求包,然后解开IP Tunnel包头信息,得到客户的请求包并进行响应处理。 </li><li>响应处理完毕之后,RS服务器使用自己的出公网的线路,将这个响应数据包发送给客户端。源IP地址还是VIP地址</li></ol><p><img src="https://img-blog.csdnimg.cn/20200423165736408.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><ol><li>客户端将请求发往前端的负载均衡器,请求报文源地址是CIP,目标地址为VIP。 </li><li>负载均衡器收到报文后,发现请求的是在规则里面存在的地址,那么它将在客户端请求报文的首部再封装一层IP报文,将源地址改为DIP,目标地址改为RIP,并将此包发送给RS。 </li><li>RS收到请求报文后,会首先拆开第一层封装,然后发现里面还有一层IP首部的目标地址是自己lo接口上的VIP,所以会处理次请求报文,并将响应报文通过lo接口送给eth0网卡直接发送给客户端。 </li></ol><blockquote><p>注意: 需要设置lo接口的VIP不能在共网上出现。</p></blockquote><p>总结: </p><ol><li>TUNNEL 模式必须在所有的 realserver 机器上面绑定 VIP 的 IP 地址 </li><li>TUNNEL 模式的 vip ——>realserver 的包通信通过 TUNNEL 模式,不管是内网和外网都能通信,所以不需要 lvs vip 跟 realserver 在同一个网段内 </li><li>TUNNEL 模式 realserver 会把 packet 直接发给 client 不会给 lvs 了</li><li>TUNNEL 模式走的隧道模式,所以运维起来比较难,所以一般不用。 </li></ol><p><strong>优点:</strong> 负载均衡器只负责将请求包分发给后端节点服务器,而RS将应答包直接发给用户。所以,减少了负载均衡器的大量数据流动,负载均衡器不再是系统的瓶颈,就能处理很巨大的请求量,这种方式,一台负载均衡器能够为很多RS进行分发。而且跑在公网上就能进行不同地域的分发。 </p><p><strong>缺点:</strong> 隧道模式的RS节点需要合法IP,这种方式需要所有的服务器支持”IP Tunneling”(IP Encapsulation)协议,服务器可能只局限在部分Linux系统上。</p><h4 id="3-DR模式(直接路由模式"><a href="#3-DR模式(直接路由模式" class="headerlink" title="3. DR模式(直接路由模式)"></a>3. 
DR模式(直接路由模式)</h4><p><code>Virtual server via direct routing (vs/dr) DR</code>模式是通过改写请求报文的目标MAC地址,将请求发给真实服务器的,而真实服务器响应后的处理结果直接返回给客户端用户。同TUN模式一样,DR模式可以极大的提高集群系统的伸缩性。而且DR模式没有IP隧道的开销,对集群中的真实服务器也没有必须支持IP隧道协议的要求。但是要求调度器LB与真实服务器RS都有一块网卡连接到同一物理网段上,必须在同一个局域网环境。 DR模式是互联网使用比较多的一种模式。 </p><p><strong>DR模式原理图:</strong><br><img src="https://img-blog.csdnimg.cn/20200423170032869.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>DR模式原理过程简述:</strong> </p><p> VS/DR模式的工作流程图如上图所示,它的连接调度和管理与NAT和TUN中的一样,它的报文转发方法和前两种不同。DR模式将报文直接路由给目标真实服务器。在DR模式中,调度器根据各个真实服务器的负载情况,连接数多少等,动态地选择一台服务器,不修改目标IP地址和目标端口,也不封装IP报文,而是将请求报文的数据帧的目标MAC地址改为真实服务器的MAC地址。然后再将修改的数据帧在服务器组的局域网上发送。因为数据帧的MAC地址是真实服务器的MAC地址,并且又在同一个局域网。那么根据局域网的通讯原理,真实服务器是一定能够收到由LB发出的数据包。真实服务器接收到请求数据包的时候,解开IP包头查看到的目标IP是VIP。(此时只有自己的IP符合目标IP才会接收进来,所以我们需要在本地的回环接口上面配置VIP。</p><blockquote><p>另:由于网络接口都会进行ARP广播响应,但集群的其他机器都有这个VIP的lo接口,都响应就会冲突。所以我们需要把真实服务器的lo接口的ARP响应关闭掉。)然后真实服务器完成请求响应,之后根据自己的路由信息将这个响应数据包发送回给客户,并且源IP地址还是VIP。 </p></blockquote><p><strong>DR模式小结:</strong> </p><ol><li>通过在调度器LB上修改数据包的目的MAC地址实现转发。注意源地址仍然是CIP,目的地址仍然是VIP地址。</li><li>请求的报文经过调度器,而RS响应处理后的报文无需经过调度器LB,因此并发访问量大时使用效率很高(和NAT模式比) </li><li>因为DR模式是通过MAC地址改写机制实现转发,因此所有RS节点和调度器LB只能在一个局域网里面</li><li>RS主机需要绑定VIP地址在LO接口上,并且需要配置ARP抑制。</li><li>RS节点的默认网关不需要配置成LB,而是直接配置为上级路由的网关,能让RS直接出网就可以。 </li><li>由于DR模式的调度器仅做MAC地址的改写,所以调度器LB就不能改写目标端口,那么RS服务器就得使用和VIP相同的端口提供服务</li></ol><p><img src="https://img-blog.csdnimg.cn/20200423170212314.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><ol><li>客户端将请求发往前端的负载均衡器,请求报文源地址是CIP,目标地址为VIP。 </li><li>负载均衡器收到报文后,发现请求的是在规则里面存在的地址,那么它将客户端请求报文的源MAC地址改为自己DIP的MAC地址,目标MAC改为了RIP的MAC地址,并将此包发送给RS。 </li><li>RS发现请求报文中的目的MAC是自己,就会将此报文接收下来,处理完请求报文后,将响应报文通过lo接口送给eth0网卡直接发送给客户端。 </li></ol><blockquote><p>注意: 需要设置lo接口的VIP不能响应本地网络内的arp请求。 </p></blockquote><p><strong>总结:</strong> </p><ol><li>通过在调度器 LB 上修改数据包的目的 MAC 地址实现转发。注意源地址仍然是 CIP,目的地址仍然是 VIP 地址。</li><li>请求的报文经过调度器,而 RS 响应处理后的报文无需经过调度器 LB,因此并发访问量大时使用效率很高(和 NAT 模式比)</li><li>因为 DR 模式是通过 MAC 地址改写机制实现转发,因此所有 RS 节点和调度器 LB 只能在一个局域网里面 </li><li>RS 主机需要绑定 VIP 地址在 LO 接口(掩码32 位)上,并且需要配置 ARP 抑制。</li><li>RS 节点的默认网关不需要配置成 LB,而是直接配置为上级路由的网关,能让 RS 直接出网就可以。 </li><li>由于 DR 模式的调度器仅做 MAC 地址的改写,所以调度器 LB 就不能改写目标端口,那么 RS 服务器就得使用和 VIP 相同的端口提供服务。</li><li>直接对外的业务比如WEB等,RS 的IP最好是使用公网IP。对内的服务,比如数据库等最好使用内网IP。 </li></ol><p><strong>优点:</strong><br>和TUN(隧道模式)一样,负载均衡器也只是分发请求,应答包通过单独的路由方法返回给客户端。与VS-TUN相比,VS-DR这种实现方式不需要隧道结构,因此可以使用大多数操作系统作为物理服务器。 DR模式的效率很高,但是配置稍微复杂一点,因此对于访问量不是特别大的公司可以用haproxy/nginx取代。日1000-2000W PV或者并发请求1万以下都可以考虑用haproxy/nginx。 </p><p><strong>缺点:</strong> 所有 RS 节点和调度器 LB 只能在一个局域网里面。</p><h2 id="在LVS1配置LVS负载均衡"><a href="#在LVS1配置LVS负载均衡" class="headerlink" title="在LVS1配置LVS负载均衡"></a>在LVS1配置LVS负载均衡</h2><h3 id="1-使用centos镜像生成lvs-keep镜像"><a href="#1-使用centos镜像生成lvs-keep镜像" class="headerlink" title="1. 使用centos镜像生成lvs-keep镜像"></a>1. 
使用centos镜像生成lvs-keep镜像</h3><ol><li>启动centos容器并进入<br><code>docker run -d --privileged centos:v1 /usr/sbin/init</code><br>2) 在centos容器中使用yum方式安装lvs和keepalived<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">yum install ipvsadm</span><br><span class="line">yum install keepalived</span><br></pre></td></tr></table></figure></li></ol><p>3) 保存容器为镜像<br><code>docker commit 容器ID lvs-keep</code></p><h3 id="2-使用nginx镜像启动nginx1和nginx2两个容器"><a href="#2-使用nginx镜像启动nginx1和nginx2两个容器" class="headerlink" title="2. 使用nginx镜像启动nginx1和nginx2两个容器"></a>2. 使用nginx镜像启动nginx1和nginx2两个容器</h3><p>1) 创建docker网络<br><code>docker network create --subnet=172.18.0.0/16 cluster</code><br>2) 查看宿主机上的docker网络类型种类<br><code>docker network ls</code><br>3) 启动容器nginx1,nginx2 设定地址为172.18.0.11, 172.18.0.12<br><code>docker run -d --privileged --net cluster --ip 172.18.0.11 --name nginx1 nginx /usr/sbin/init</code><br><code>docker run -d --privileged --net cluster --ip 172.18.0.12 --name nginx2 nginx /usr/sbin/init</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.11 --name nginx1 nginx /usr/sbin/init</span></span><br><span class="line">8deb9befa966726e16bee8fb4a8eb63ef0c47d66f507092b3bad63e11a348ffd</span><br><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.12 --name nginx2 nginx /usr/sbin/init</span></span><br><span class="line">f2fbc74a948461060345899ffd5d0e4e82b7012e2fff793daca3aa78fa4e90b9</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="3-使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡"><a href="#3-使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡" class="headerlink" title="3. 使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡"></a>3. 
使用lvs-keep镜像启用LVS1容器,配置LVS负载均衡</h3><blockquote><p>在宿主机上安装ipvsadm <code>yum install ipvsadm</code> # modprobe ip_vs //装入ip_vs模块<br>1) 启动容器LVS1,设定地址为172.18.0.8<br><code>docker run -d --privileged --net cluster --ip 172.18.0.8 --name LVS1 lvs-keep /usr/sbin/init</code><br>2) 进入LVS1容器<br><code>lsmod |grep ip_vs</code> 列出装载的模块<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@58a00cfe8c9d /]<span class="comment"># lsmod | grep ip_vs</span></span><br><span class="line">ip_vs 145497 0</span><br><span class="line">nf_conntrack 139224 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4</span><br><span class="line">libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack</span><br></pre></td></tr></table></figure></p></blockquote><p>3) 在LVS1创建VIP调度地址<br><code>ifconfig eth0:0 172.18.0.10 netmask 255.255.255.255</code><br>4) 在LVS1创建虚拟服务器,使用轮询方式:<br><code>ipvsadm -At 172.18.0.10:80 -s rr</code><br>5) 在LVS1添加nginx1和nginx2两台服务器节点,采用DR直接路由模式<br><code>ipvsadm -at 172.18.0.10:80 -r 172.18.0.11:80 -g</code><br><code>ipvsadm -at 172.18.0.10:80 -r 172.18.0.12:80 -g</code></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ifconfig eth0:0 172.18.0.10 netmask 255.255.255.255</span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -At 172.18.0.10:80 -s rr</span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -at 172.18.0.10:80 -r 172.18.0.11:80 -g</span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -at 172.18.0.10:80 -r 172.18.0.12:80 -g </span></span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -ln</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP 172.18.0.10:80 rr</span><br><span class="line"> -> 172.18.0.11:80 Route 1 0 0 </span><br><span class="line"> -> 172.18.0.12:80 Route 1 0 0 </span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>6) 在nginx1和nginx2两台服务器节点,创建VIP应答地址<br><code>ifconfig lo:0 172.18.0.10 netmask 255.255.255.255</code><br>7) 在nginx1和nginx2两台服务器节点,屏蔽ARP请求<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">echo</span> <span class="string">"1"</span> > /proc/sys/net/ipv4/conf/lo/arp_ignore </span><br><span class="line"><span class="built_in">echo</span> <span class="string">"1"</span> > /proc/sys/net/ipv4/conf/all/arp_ignore </span><br><span 
class="line"><span class="built_in">echo</span> <span class="string">"2"</span> > /proc/sys/net/ipv4/conf/lo/arp_announce </span><br><span class="line"><span class="built_in">echo</span> <span class="string">"2"</span> > /proc/sys/net/ipv4/conf/all/arp_announce</span><br></pre></td></tr></table></figure></p><p>8) 在LVS1中,<code>ipvsadm -L</code> 检查配置情况<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@58a00cfe8c9d /]<span class="comment"># ipvsadm -L </span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP 58a00cfe8c9d:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 0 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 0 </span><br><span class="line">[root@58a00cfe8c9d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>9) 在宿主机中访问<a href="http://172.18.0.10,刷新时轮流访问两台节点服务器" target="_blank" rel="noopener">http://172.18.0.10,刷新时轮流访问两台节点服务器</a><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h2 id="使用KeepAlive配置LVS高可用"><a href="#使用KeepAlive配置LVS高可用" class="headerlink" title="使用KeepAlive配置LVS高可用"></a>使用KeepAlive配置LVS高可用</h2><blockquote><p>在两台LVS服务器安装配置KeepAlive,使得两台服务器互为备份并支持负载均衡<br>保持任务一中nginx1和nginx2两台服务器节点不变,重新启动容器LVS1和LVS2</p></blockquote><h3 id="1-使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡"><a href="#1-使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡" class="headerlink" title="1. 使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡"></a>1. 
使用lvs-keep镜像启用LVS1和LVS2容器,配置LVS负载均衡</h3><blockquote><p>注意:需要在宿主机安装ipvsadm,# modprobe ip_vs //装入ip_vs模块<br>1) 启动容器LVS1,设定地址为172.18.0.8<br><code>docker run -d --privileged --net cluster --ip 172.18.0.8 --name LVS1 lvs-keep /usr/sbin/init</code><br>2) 启动容器LVS2,设定地址为172.18.0.9<br><code>docker run -d --privileged --net cluster --ip 172.18.0.9 --name LVS2 lvs-keep /usr/sbin/init</code><br>3) 编辑LVS1和LVS2中/etc/ keepalived /keepalived.conf文件<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br></pre></td><td class="code"><pre><span class="line">! 
Configuration File for keepalived</span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> acassen@firewall.loc</span><br><span class="line"> failover@firewall.loc</span><br><span class="line"> sysadmin@firewall.loc</span><br><span class="line"> }</span><br><span class="line"> notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id LVS1</span><br><span class="line"> vrrp_skip_check_adv_addr #跳过vrrp报文地址检查</span><br><span class="line"> #vrrp_strict #严格遵守vrrp协议</span><br><span class="line"> vrrp_garp_interval 3 #在一个网卡上每组gratuitous arp消息之间的延迟时间,默认为0</span><br><span class="line"> vrrp_gna_interval 3 #在一个网卡上每组na消息之间的延迟时间,默认为0</span><br><span class="line">}</span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state MASTER #LVS2设置为BACKUP</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 100 #L 设置权重</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line">virtual_server 172.18.0.10 80 { #配置虚拟服务器</span><br><span class="line"> delay_loop 6 #设置健康检查时间,单位是秒</span><br><span class="line"> lb_algo rr #设置负载调度算法,默认为rr即轮询算法</span><br><span class="line"> lb_kind DR #设置LVS实现LB机制,有NAT、TUNN和DR三个模式可选</span><br><span class="line"> persistence_timeout 0 #会话保持时间,单位为秒,设为0可以看到刷新效果</span><br><span class="line"> protocol TCP #指定转发协议类型,有TCP和UDP两种</span><br><span class="line"> real_server 172.18.0.11 80 { #配置服务器节点</span><br><span class="line"> weight 1</span><br><span class="line"> TCP_CHECK { #配置节点权值,数字越大权值越高</span><br><span class="line"> connect_timeout 3 #超时时间</span><br><span class="line"> retry 3 #重试次数</span><br><span class="line"> delay_before_retry 3 #重试间隔</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> real_server 172.18.0.12 80 {</span><br><span class="line"> weight 1</span><br><span class="line"> TCP_CHECK {</span><br><span class="line"> connect_timeout 3</span><br><span class="line"> retry 3</span><br><span class="line"> delay_before_retry 3</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p></blockquote><h3 id="2-验证KeepAlive配置LVS高可用集群"><a href="#2-验证KeepAlive配置LVS高可用集群" class="headerlink" title="2. 验证KeepAlive配置LVS高可用集群"></a>2. 
验证KeepAlive配置LVS高可用集群</h3><p>1) 在两台服务器重启keepalived服务,i<code>pvsadm -L</code>检查配置情况<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@ef99a927fc2d /]<span class="comment"># ipvsadm -L</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP 172.18.0.10:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 0 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 0 </span><br><span class="line">[root@ef99a927fc2d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@a033e26a1fd8 /]<span class="comment"># ipvsadm -L</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP a033e26a1fd8:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 0 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 0</span><br></pre></td></tr></table></figure><p>2) 在宿主机中访问<a href="http://172.18.0.10,刷新时轮流访问两台节点服务器" target="_blank" rel="noopener">http://172.18.0.10,刷新时轮流访问两台节点服务器</a><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>3) 在LVS1服务器#ifconfig eth0 down //当掉服务器网卡</p><p>4) 在宿主机中访问<a href="http://172.18.0.10,刷新时轮流访问两台节点服务器" target="_blank" rel="noopener">http://172.18.0.10,刷新时轮流访问两台节点服务器</a></p><p>5) 在LVS2中,#ipvsadm -L //检查配置和连接情况<br>lvs2中可以看到<code>InActConn</code>增加</p><p>因为lvs1将eth0关闭以后, 有lvs2接管服务<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">[root@ef99a927fc2d /]<span class="comment"># ipvsadm -L</span></span><br><span class="line">IP Virtual Server version 1.2.1 (size=4096)</span><br><span class="line">Prot LocalAddress:Port Scheduler Flags</span><br><span 
class="line"> -> RemoteAddress:Port Forward Weight ActiveConn InActConn</span><br><span class="line">TCP ef99a927fc2d:http rr</span><br><span class="line"> -> nginx1.cluster:http Route 1 0 3 </span><br><span class="line"> -> nginx2.cluster:http Route 1 0 3 </span><br><span class="line">[root@ef99a927fc2d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p>]]></content>
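<p>补充说明: 下面给出一个用于观察轮询调度与故障切换效果的小脚本草稿(仅为示例, 假设在宿主机上执行, VIP 为 172.18.0.10, 两台节点首页分别返回 nginx1 和 nginx2): 连续请求 VIP 并统计各节点的命中次数, 在关闭 LVS1 的 eth0 前后各执行一次, 即可对比调度与接管情况。</p>
<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"># 示例脚本: 连续请求 VIP, 统计各真实服务器的命中次数</span><br><span class="line">VIP=172.18.0.10</span><br><span class="line">for i in $(seq 1 10); do</span><br><span class="line">    curl -s http://$VIP    # 节点首页内容为 nginx1 或 nginx2</span><br><span class="line">    sleep 1</span><br><span class="line">done | sort | uniq -c</span><br><span class="line"># 关闭 LVS1 的 eth0 后再次执行, 请求仍能得到响应, 说明 VIP 已由 LVS2 接管</span><br></pre></td></tr></table></figure>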
<summary type="html">
<h2 id="实验要求"><a href="#实验要求" class="headerlink" title="实验要求"></a>实验要求</h2><p>1、 安装配置LVS负载均衡<br>2、 安装配置LVS高可用负载均衡</p>
<p>拓扑图:<br><img
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(六)----深入掌握Pod</title>
<link href="https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E5%85%AD-%E6%B7%B1%E5%85%A5%E6%8E%8C%E6%8F%A1Pod/"/>
<id>https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-入门到实践-六-深入掌握Pod/</id>
<published>2020-04-21T09:55:33.000Z</published>
<updated>2020-04-21T09:55:47.368Z</updated>
<content type="html"><![CDATA[<p>上几章写了Kubernetes的基本概念与集群搭建<br>接下来将深入探索Pod的应用、配置、调度、升级及扩缩容,讲述Kubernetes容器编排。</p><p>本章将对Kubernetes如何发布与管理容器应用进行详细说明和示例,主要包括Pod和容器的使用、应用配置管理、Pod的控制和调度管理、Pod的升级和回滚,以及Pod的扩缩容机制等内容</p><h2 id="深入掌握Pod"><a href="#深入掌握Pod" class="headerlink" title="深入掌握Pod"></a>深入掌握Pod</h2><h3 id="Pod定义"><a href="#Pod定义" class="headerlink" title="Pod定义"></a>Pod定义</h3><p>Pod定义文件的yaml格式完整版<br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span> <span class="comment">#必选,版本号,例如v1,版本号必须可以用 kubectl api-versions 查询到 .</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span> <span class="comment">#必选,Pod</span></span><br><span class="line"><span class="attr">metadata:</span> <span class="comment">#必选,元数据</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">string</span> <span class="comment">#必选,Pod名称</span></span><br><span class="line"><span class="attr"> namespace:</span> <span 
class="string">string</span> <span class="comment">#必选,Pod所属的命名空间,默认为"default"</span></span><br><span class="line"><span class="attr"> labels:</span> <span class="comment">#自定义标签</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#自定义标签名字</span></span><br><span class="line"><span class="attr"> annotations:</span> <span class="comment">#自定义注释列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr">spec:</span> <span class="comment">#必选,Pod中容器的详细定义</span></span><br><span class="line"><span class="attr"> containers:</span> <span class="comment">#必选,Pod中容器列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#必选,容器名称,需符合RFC 1035规范</span></span><br><span class="line"><span class="attr"> image:</span> <span class="string">string</span> <span class="comment">#必选,容器的镜像名称</span></span><br><span class="line"><span class="attr"> imagePullPolicy:</span> <span class="string">[</span> <span class="string">Always|Never|IfNotPresent</span> <span class="string">]</span> <span class="comment">#获取镜像的策略 Alawys表示下载镜像 IfnotPresent表示优先使用本地镜像,否则下载镜像,Nerver表示仅使用本地镜像</span></span><br><span class="line"><span class="attr"> command:</span> <span class="string">[string]</span> <span class="comment">#容器的启动命令列表,如不指定,使用打包时使用的启动命令</span></span><br><span class="line"><span class="attr"> args:</span> <span class="string">[string]</span> <span class="comment">#容器的启动命令参数列表</span></span><br><span class="line"><span class="attr"> workingDir:</span> <span class="string">string</span> <span class="comment">#容器的工作目录</span></span><br><span class="line"><span class="attr"> volumeMounts:</span> <span class="comment">#挂载到容器内部的存储卷配置</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#引用pod定义的共享存储卷的名称,需用volumes[]部分定义的的卷名</span></span><br><span class="line"><span class="attr"> mountPath:</span> <span class="string">string</span> <span class="comment">#存储卷在容器内mount的绝对路径,应少于512字符</span></span><br><span class="line"><span class="attr"> readOnly:</span> <span class="string">boolean</span> <span class="comment">#是否为只读模式</span></span><br><span class="line"><span class="attr"> ports:</span> <span class="comment">#需要暴露的端口库号列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#端口的名称</span></span><br><span class="line"><span class="attr"> containerPort:</span> <span class="string">int</span> <span class="comment">#容器需要监听的端口号</span></span><br><span class="line"><span class="attr"> hostPort:</span> <span class="string">int</span> <span class="comment">#容器所在主机需要监听的端口号,默认与Container相同</span></span><br><span class="line"><span class="attr"> protocol:</span> <span class="string">string</span> <span class="comment">#端口协议,支持TCP和UDP,默认TCP</span></span><br><span class="line"><span class="attr"> env:</span> <span class="comment">#容器运行前需设置的环境变量列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#环境变量名称</span></span><br><span class="line"><span class="attr"> value:</span> <span class="string">string</span> <span class="comment">#环境变量的值</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="comment">#资源限制和请求的设置</span></span><br><span class="line"><span 
class="attr"> limits:</span> <span class="comment">#资源限制的设置</span></span><br><span class="line"><span class="attr"> cpu:</span> <span class="string">string</span> <span class="comment">#Cpu的限制,单位为core数,将用于docker run --cpu-shares参数</span></span><br><span class="line"><span class="attr"> memory:</span> <span class="string">string</span> <span class="comment">#内存限制,单位可以为Mib/Gib,将用于docker run --memory参数</span></span><br><span class="line"><span class="attr"> requests:</span> <span class="comment">#资源请求的设置</span></span><br><span class="line"><span class="attr"> cpu:</span> <span class="string">string</span> <span class="comment">#Cpu请求,容器启动的初始可用数量</span></span><br><span class="line"><span class="attr"> memory:</span> <span class="string">string</span> <span class="comment">#内存请求,容器启动的初始可用数量</span></span><br><span class="line"><span class="attr"> livenessProbe:</span> <span class="comment">#对Pod内各容器健康检查的设置,当探测无响应几次后将自动重启该容器,检查方法有exec、httpGet和tcpSocket,对一个容器只需设置其中一种方法即可</span></span><br><span class="line"><span class="attr"> exec:</span> <span class="comment">#对Pod容器内检查方式设置为exec方式</span></span><br><span class="line"><span class="attr"> command:</span> <span class="string">[string]</span> <span class="comment">#exec方式需要制定的命令或脚本</span></span><br><span class="line"><span class="attr"> httpGet:</span> <span class="comment">#对Pod内个容器健康检查方法设置为HttpGet,需要制定Path、port</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> port:</span> <span class="string">number</span></span><br><span class="line"><span class="attr"> host:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> scheme:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> HttpHeaders:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> value:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> tcpSocket:</span> <span class="comment">#对Pod内个容器健康检查方式设置为tcpSocket方式</span></span><br><span class="line"><span class="attr"> port:</span> <span class="string">number</span></span><br><span class="line"><span class="attr"> initialDelaySeconds:</span> <span class="number">0</span> <span class="comment">#容器启动完成后首次探测的时间,单位为秒</span></span><br><span class="line"><span class="attr"> timeoutSeconds:</span> <span class="number">0</span> <span class="comment">#对容器健康检查探测等待响应的超时时间,单位秒,默认1秒</span></span><br><span class="line"><span class="attr"> periodSeconds:</span> <span class="number">0</span> <span class="comment">#对容器监控检查的定期探测时间设置,单位秒,默认10秒一次</span></span><br><span class="line"><span class="attr"> successThreshold:</span> <span class="number">0</span></span><br><span class="line"><span class="attr"> failureThreshold:</span> <span class="number">0</span></span><br><span class="line"><span class="attr"> securityContext:</span></span><br><span class="line"><span class="attr"> privileged:</span> <span class="literal">false</span></span><br><span class="line"><span class="attr"> restartPolicy:</span> <span class="string">[Always</span> <span class="string">| Never | OnFailure] #Pod的重启策略,Always表示一旦不管以何种方式终止运行,kubelet都将重启,OnFailure表示只有Pod以非0退出码退出才重启,Nerver表示不再重启该Pod</span></span><br><span class="line"><span class="string"></span><span class="attr"> nodeSelector:</span> <span class="string">obeject</span> <span 
class="comment">#设置NodeSelector表示将该Pod调度到包含这个label的node上,以key:value的格式指定</span></span><br><span class="line"><span class="attr"> imagePullSecrets:</span> <span class="comment">#Pull镜像时使用的secret名称,以key:secretkey格式指定</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> hostNetwork:</span> <span class="literal">false</span> <span class="comment">#是否使用主机网络模式,默认为false,如果设置为true,表示使用宿主机网络</span></span><br><span class="line"><span class="attr"> volumes:</span> <span class="comment">#在该pod上定义共享存储卷列表</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">string</span> <span class="comment">#共享存储卷名称 (volumes类型有很多种)</span></span><br><span class="line"><span class="attr"> emptyDir:</span> <span class="string">{}</span> <span class="comment">#类型为emtyDir的存储卷,与Pod同生命周期的一个临时目录。为空值</span></span><br><span class="line"><span class="attr"> hostPath:</span> <span class="string">string</span> <span class="comment">#类型为hostPath的存储卷,表示挂载Pod所在宿主机的目录</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span> <span class="comment">#Pod所在宿主机的目录,将被用于同期中mount的目录</span></span><br><span class="line"><span class="attr"> secret:</span> <span class="comment">#类型为secret的存储卷,挂载集群与定义的secre对象到容器内部</span></span><br><span class="line"><span class="attr"> scretname:</span> <span class="string">string</span> </span><br><span class="line"><span class="attr"> items:</span> </span><br><span class="line"><span class="attr"> - key:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> configMap:</span> <span class="comment">#类型为configMap的存储卷,挂载预定义的configMap对象到容器内部</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> items:</span></span><br><span class="line"><span class="attr"> - key:</span> <span class="string">string</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">string</span></span><br></pre></td></tr></table></figure></p><h3 id="静态Pod"><a href="#静态Pod" class="headerlink" title="静态Pod"></a>静态Pod</h3><p>静态Pod是由kubelet进行管理的仅存在于特定Node上的Pod。</p><p>它们不能通过API Server进行管理,无法与ReplicationController、Deployment或者DaemonSet进行关联,并且kubelet无法对它们进行健康检查。</p><p>静态Pod总是由kubelet创建的,并且总在kubelet所在的Node上运行。创建静态Pod有两种方式:</p><ul><li>配置文件方式</li><li>HTTP方式<figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">pod-demo</span> </span><br><span class="line"><span class="attr"> namespace:</span> <span 
class="string">default</span> </span><br><span class="line"><span class="attr"> labels:</span> </span><br><span class="line"><span class="attr"> app:</span> <span class="string">myapp</span></span><br><span class="line"><span class="attr">spec:</span> </span><br><span class="line"><span class="attr"> containers:</span> </span><br><span class="line"><span class="attr"> - name:</span> <span class="string">myapp-1</span> </span><br><span class="line"><span class="attr"> image:</span> <span class="string">plutoacharon/myapp:v1</span> </span><br><span class="line"><span class="attr"> - name:</span> <span class="string">busybox-1</span> </span><br><span class="line"><span class="attr"> image:</span> <span class="attr">busybox:latest</span> </span><br><span class="line"><span class="attr"> command:</span> <span class="bullet">-</span> <span class="string">"/bin/sh"</span> <span class="bullet">-</span> <span class="string">"-c"</span> <span class="bullet">-</span> <span class="string">"sleep 3600"</span></span><br></pre></td></tr></table></figure></li></ul><h3 id="Pod容器共享Volume"><a href="#Pod容器共享Volume" class="headerlink" title="Pod容器共享Volume"></a>Pod容器共享Volume</h3><p>同一个Pod中的多个容器能够共享Pod级别的存储卷Volume。</p><p>Volume可以被定义为各种类型,多个容器各自进行挂载操作,将一个Volume挂载为容器内部需要的目录<br><img src="https://img-blog.csdnimg.cn/20200420212747115.png" alt="在这里插入图片描述"><br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Pod</span></span><br><span class="line"><span class="attr">metadata:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">test-pd</span></span><br><span class="line"><span class="attr">spec:</span> </span><br><span class="line"><span class="attr"> containers:</span> </span><br><span class="line"><span class="attr"> - image:</span> <span class="string">k8s.gcr.io/test-webserver</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">test-container</span> </span><br><span class="line"><span class="attr"> volumeMounts:</span> </span><br><span class="line"><span class="attr"> - mountPath:</span> <span class="string">/cache</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">cache-volume</span> </span><br><span class="line"><span class="attr"> volumes:</span> </span><br><span class="line"><span class="attr"> name:</span> <span class="string">cache-volume</span> </span><br><span class="line"><span class="attr"> emptyDir:</span> <span class="string">{}</span></span><br></pre></td></tr></table></figure></p><h4 id="emptyDir"><a href="#emptyDir" class="headerlink" title="emptyDir"></a>emptyDir</h4><p>当 Pod 被分配给节点时,首先创建emptyDir卷,并且只要该 Pod 在该节点上运行,该卷就会存在。</p><p>正如卷的名字所述,它最初是空的。Pod 中的容器可以读取和写入emptyDir卷中的相同文件,尽管该卷可以挂载到每个容器中的相同或不同路径上。</p><p>当出于任何原因从节点中删除 Pod 
时,emptyDir中的数据将被永久删除</p><p>emptyDir的用法有:</p><ul><li><p>暂存空间,例如用于基于磁盘的合并排序</p></li><li><p>用作长时间计算崩溃恢复时的检查点</p></li><li><p>Web服务器容器提供数据时,保存内容管理器容器提取的文件</p></li></ul><h3 id="ConfigMap概述"><a href="#ConfigMap概述" class="headerlink" title="ConfigMap概述"></a>ConfigMap概述</h3><p>ConfigMap 功能在 Kubernetes1.2 版本中引入,许多应用程序会从配置文件、命令行参数或环境变量中读取配置信息。</p><p>ConfigMap API 给我们提供了向容器中注入配置信息的机制,ConfigMap 可以被用来保存单个属性,也可以用来保存整个配置文件或者 JSON 二进制大对象</p><p>ConfigMap供容器使用的典型用法如下。</p><ul><li>生成为容器内的环境变量。</li><li>设置容器启动命令的启动参数(需设置为环境变量)</li><li>以Volume的形式挂载为容器内部的文件或目录。</li></ul><p>ConfigMap以一个或多个key:value的形式保存在Kubernetes系统中供应用使用,既可以用于表示一个变量的值(例如apploglevel=info),也可以用于表示一个完整配置文件的内容(例如server.xml=<?xml…>…)</p><p>可以通过YAML配置文件或者直接使用kubectl create configmap命令行的方式来创建ConfigMap。</p><p>使用ConfigMap的限制条件使用ConfigMap的限制条件如下。</p><ul><li>ConfigMap必须在Pod之前创建。</li><li>ConfigMap受Namespace限制,只有处于相同Namespace中的Pod才可以引用它。</li><li>ConfigMap中的配额管理还未能实现。</li><li>kubelet只支持可以被API Server管理的Pod使用ConfigMap。kubelet在本Node上通过 –manifest-url或–config自动创建的静态Pod将无法引用ConfigMap。</li><li>在Pod对ConfigMap进行挂载(volumeMount)操作时,在容器内部只能挂载为“目录”,无法挂载为“文件”。在挂载到容器内部后,在目录下将包含ConfigMap定义的每个item,如果在该目录下原来还有其他文件,则容器内的该目录将被挂载的ConfigMap覆盖。如果应用程序需要保留原来的其他文件,则需要进行额外的处理。可以将ConfigMap挂载到容器内部的临时目录,再通过启动脚本将配置文件复制或者链接到(cp或link命令)应用所用的实际配置目录下</li></ul><h3 id="容器内获取Pod信息(DownwardAPI)"><a href="#容器内获取Pod信息(DownwardAPI)" class="headerlink" title="容器内获取Pod信息(DownwardAPI)"></a>容器内获取Pod信息(DownwardAPI)</h3><p>我们知道,每个Pod在被成功创建出来之后,都会被系统分配唯一的名字、IP地址,并且处于某个Namespace中,那么我们如何在Pod的容器内获取Pod的这些重要信息呢?答案就是使用Downward API。</p><p>Downward API可以通过以下两种方式将Pod信息注入容器内部。</p><ul><li>环境变量:用于单个变量,可以将Pod信息和Container信息注入容器内部。</li><li>Volume挂载:将数组类信息生成为文件并挂载到容器内部。</li></ul><h3 id="Pod生命周期和重启策略"><a href="#Pod生命周期和重启策略" class="headerlink" title="Pod生命周期和重启策略"></a>Pod生命周期和重启策略</h3><p>挂起(Pending):Pod已被Kubernetes系统接受,但有一个或者多个容器镜像尚未创建。等待时间包括调度Pod的时间和通过网络下载镜像的时间,这可能需要花点时间</p><p>运行中(Running):该Pod已经绑定到了一个节点上,Pod中所有的容器都已被创建。至少有一个容器正在运行,或者正处于启动或重启状态成功(Succeeded):Pod中的所有容器都被成功终止,并且不会再重启</p><p>失败(Failed):Pod中的所有容器都已终止了,并且至少有一个容器是因为失败终止。也就是说,容器以非0状态退出或者被系统终止</p><p>未知(Unknown):因为某些原因无法取得Pod的状态,通常是因为与Pod所在主机通信失败</p><p>Pod的重启策略(RestartPolicy)应用于Pod内的所有容器,并且仅在Pod所处的Node上由kubelet进行判断和重启操作。当某个容器异常退出或者健康检查失败时,kubelet将根据RestartPolicy的设置来进行相应的操作。Pod的重启策略包括Always、OnFailure和Never,默认值为Always。</p><ul><li>Always:当容器失效时,由kubelet自动重启该容器。</li><li>OnFailure:当容器终止运行且退出码不为0时,由kubelet自动重启该容器。</li><li>Never:不论容器运行状态如何,kubelet都不会重启该容器。</li></ul><p>kubelet重启失效容器的时间间隔以sync-frequency乘以2n来计算,例如1、2、4、8倍等,最长延时5min,并且在成功重启后的10min后重置该时间。</p><p>Pod的重启策略与控制方式息息相关,当前可用于管理Pod的控制器包ReplicationController、Job、DaemonSet及直接通过kubelet管理(静态Pod)。每种控制器对Pod的重启策略要求如下</p><ul><li>RC和DaemonSet:必须设置为Always,需要保证该容器持续运行。</li><li>Job:OnFailure或Never,确保容器执行完成后不再重启。</li><li>kubelet:在Pod失效时自动重启它,不论将RestartPolicy设置为什么值,也不会对Pod进行健康检查</li></ul><h3 id="Pod健康检查和服务可用性检查"><a href="#Pod健康检查和服务可用性检查" class="headerlink" title="Pod健康检查和服务可用性检查"></a>Pod健康检查和服务可用性检查</h3><p>Kubernetes 对 Pod 的健康状态可以通过两类探针来检查:LivenessProbe 和ReadinessProbe,kubelet定期执行这两类探针来诊断容器的健康状况。</p><ul><li>LivenessProbe探针:用于判断容器是否存活(Running状态),如果LivenessProbe探针探测到容器不健康,则kubelet将杀掉该容器,并根据容器的重启策略做相应的处理。如果一个容器不包含LivenessProbe探针,那么kubelet认为该容器的LivenessProbe探针返回的值永远是Success。</li><li>ReadinessProbe探针:用于判断容器服务是否可用(Ready状态),达到Ready状态的Pod才可以接收请求。对于被Service管理的Pod,Service与Pod Endpoint的关联关系也将基于Pod是否Ready进行设置。如果在运行过程中Ready状态变为False,则系统自动将其从Service的后端Endpoint列表中隔离出去,后续再把恢复到Ready状态的Pod加回后端Endpoint列表。这样就能保证客户端在访问Service时不会被转发到服务不可用的Pod实例上。</li></ul>]]></content>
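<p>补充说明: 针对上文对 LivenessProbe 与 ReadinessProbe 的介绍, 下面给出一个 httpGet 探针配置的示例草稿(Pod 名称 probe-demo、镜像 nginx 与探测路径 / 均为假设值, 仅用于演示写法, 字段含义与正文描述一致): </p>
<figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: Pod</span><br><span class="line">metadata:</span><br><span class="line">  name: probe-demo              # 示例 Pod 名称(假设)</span><br><span class="line">spec:</span><br><span class="line">  containers:</span><br><span class="line">  - name: web</span><br><span class="line">    image: nginx                # 示例镜像(假设)</span><br><span class="line">    ports:</span><br><span class="line">    - containerPort: 80</span><br><span class="line">    livenessProbe:              # 存活探针: 探测失败时由 kubelet 按重启策略重启容器</span><br><span class="line">      httpGet:</span><br><span class="line">        path: /</span><br><span class="line">        port: 80</span><br><span class="line">      initialDelaySeconds: 10   # 容器启动后首次探测的等待时间</span><br><span class="line">      periodSeconds: 10         # 探测周期</span><br><span class="line">    readinessProbe:             # 就绪探针: 未就绪的 Pod 会被移出 Service 的 Endpoint 列表</span><br><span class="line">      httpGet:</span><br><span class="line">        path: /</span><br><span class="line">        port: 80</span><br><span class="line">      initialDelaySeconds: 5</span><br><span class="line">      periodSeconds: 5</span><br></pre></td></tr></table></figure>
<p>将上述内容保存为 yaml 文件后可用 kubectl apply -f 创建, 再通过 kubectl describe pod 在 Events 中观察探测情况(常规用法, 供参考)。</p>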
<summary type="html">
<p>上几章写了Kubernetes的基本概念与集群搭建<br>接下来将深入探索Pod的应用、配置、调度、升级及扩缩容,讲述Kubernetes容器编排。</p>
<p>本章将对Kubernetes如何发布与管理容器应用进行详细说明和示例,主要包括Pod和容器的使用、应用配置管理
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Python算法学习: 2020年蓝桥杯省赛模拟赛-Python题解</title>
<link href="https://plutoacharon.github.io/2020/04/21/Python%E7%AE%97%E6%B3%95%E5%AD%A6%E4%B9%A0-2020%E5%B9%B4%E8%93%9D%E6%A1%A5%E6%9D%AF%E7%9C%81%E8%B5%9B%E6%A8%A1%E6%8B%9F%E8%B5%9B-Python%E9%A2%98%E8%A7%A3/"/>
<id>https://plutoacharon.github.io/2020/04/21/Python算法学习-2020年蓝桥杯省赛模拟赛-Python题解/</id>
<published>2020-04-21T09:54:56.000Z</published>
<updated>2020-04-21T10:01:48.090Z</updated>
<content type="html"><![CDATA[<h2 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h2><h3 id="填空题1"><a href="#填空题1" class="headerlink" title="填空题1"></a>填空题1</h3><p>问题描述<br> 一个包含有2019个结点的无向连通图,最少包含多少条边?<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :2018</p><h3 id="填空题2"><a href="#填空题2" class="headerlink" title="填空题2"></a>填空题2</h3><p>问题描述<br> 将LANQIAO中的字母重新排列,可以得到不同的单词,如LANQIAO、AAILNOQ等,注意这7个字母都要被用上,单词不一定有具体的英文意义。<br> 请问,总共能排列如多少个不同的单词。<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :2520</p><h3 id="填空题3"><a href="#填空题3" class="headerlink" title="填空题3"></a>填空题3</h3><p>问题描述<br> 在计算机存储中,12.5MB是多少字节?<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :13107200</p><h3 id="填空题4"><a href="#填空题4" class="headerlink" title="填空题4"></a>填空题4</h3><p>问题描述<br> 由1对括号,可以组成一种合法括号序列:()。<br> 由2对括号,可以组成两种合法括号序列:()()、(())。<br> 由4对括号组成的合法括号序列一共有多少种?<br>答案提交<br> 这是一道结果填空的题,你只需要算出结果后提交即可。本题的结果为一个整数,在提交答案时只填写这个整数,填写多余的内容将无法得分。<br>答案 :14</p><h3 id="编程题1-凯撒密码加密"><a href="#编程题1-凯撒密码加密" class="headerlink" title="编程题1 凯撒密码加密"></a>编程题1 凯撒密码加密</h3><p>问题描述<br> 给定一个单词,请使用凯撒密码将这个单词加密。<br> 凯撒密码是一种替换加密的技术,单词中的所有字母都在字母表上向后偏移3位后被替换成密文。即a变为d,b变为e,…,w变为z,x变为a,y变为b,z变为c。<br> 例如,lanqiao会变成odqtldr。<br>输入格式<br> 输入一行,包含一个单词,单词中只包含小写英文字母。<br>输出格式<br> 输出一行,表示加密后的密文。<br>样例输入<br>lanqiao<br>样例输出<br>odqtldr<br>评测用例规模与约定<br> 对于所有评测用例,单词中的字母个数不超过100<br><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">ans = <span class="string">""</span></span><br><span class="line">strq = list(input())</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(len(strq)):</span><br><span class="line"> <span class="keyword">if</span> <span class="number">97</span> <= ord(strq[i]) <= <span class="number">119</span>:</span><br><span class="line"> strq[i] = chr(ord(strq[i]) + <span class="number">3</span>)</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> strq[i] = chr(ord(strq[i]) - <span class="number">120</span> + <span class="number">97</span>)</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(len(strq)):</span><br><span class="line"> ans += strq[i]</span><br><span class="line">print(ans)</span><br></pre></td></tr></table></figure></p><h3 id="编程题2-反倍数"><a href="#编程题2-反倍数" class="headerlink" title="编程题2 反倍数"></a>编程题2 反倍数</h3><p>问题描述<br> 给定三个整数 a, b, c,如果一个整数既不是 a 的整数倍也不是 b 的整数倍还不是 c 的整数倍,则这个数称为反倍数。<br> 请问在 1 至 n 中有多少个反倍数。<br>输入格式<br> 输入的第一行包含一个整数 n。<br> 第二行包含三个整数 a, b, c,相邻两个数之间用一个空格分隔。<br>输出格式<br> 输出一行包含一个整数,表示答案。<br>样例输入<br>30<br>2 3 6<br>样例输出<br>10<br>样例说明<br> 以下这些数满足要求:1, 5, 7, 11, 13, 17, 19, 23, 25, 29。<br>评测用例规模与约定<br> 对于 40% 的评测用例,1 <= n <= 10000。<br> 对于 80% 的评测用例,1 <= n <= 100000。<br> 对于所有评测用例,1 <= n <= 1000000,1 <= a <= n,1 <= b <= n,1 <= c <= n。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span 
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">n = int(input())</span><br><span class="line">ans = <span class="number">0</span></span><br><span class="line">a,b,c = map(int, input().split())</span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">1</span>, n+<span class="number">1</span>):</span><br><span class="line"> <span class="keyword">if</span> i % a != <span class="number">0</span> <span class="keyword">and</span> i % b != <span class="number">0</span> <span class="keyword">and</span> i % c != <span class="number">0</span>:</span><br><span class="line"> ans += <span class="number">1</span></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">continue</span></span><br><span class="line">print(ans)</span><br></pre></td></tr></table></figure><h3 id="编程题3-摆动序列"><a href="#编程题3-摆动序列" class="headerlink" title="编程题3 摆动序列"></a>编程题3 摆动序列</h3><p>问题描述<br> 如果一个序列的奇数项都比前一项大,偶数项都比前一项小,则称为一个摆动序列。即 a[2i]<a[2i-1], a[2i+1]>a[2i]。<br> 小明想知道,长度为 m,每个数都是 1 到 n 之间的正整数的摆动序列一共有多少个。<br>输入格式<br> 输入一行包含两个整数 m,n。<br>输出格式<br> 输出一个整数,表示答案。答案可能很大,请输出答案除以10000的余数。<br>样例输入<br>3 4<br>样例输出<br>14<br>样例说明<br> 以下是符合要求的摆动序列:<br> 2 1 2<br> 2 1 3<br> 2 1 4<br> 3 1 2<br> 3 1 3<br> 3 1 4<br> 3 2 3<br> 3 2 4<br> 4 1 2<br> 4 1 3<br> 4 1 4<br> 4 2 3<br> 4 2 4<br> 4 3 4<br>评测用例规模与约定<br> 对于 20% 的评测用例,1 <= n, m <= 5;<br> 对于 50% 的评测用例,1 <= n, m <= 10;<br> 对于 80% 的评测用例,1 <= n, m <= 100;<br> 对于所有评测用例,1 <= n, m <= 1000。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line">ans = <span class="number">0</span></span><br><span class="line">m, n = map(int, input().split())</span><br><span class="line">dp = [[<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(<span class="number">1024</span>)] <span class="keyword">for</span> _ <span class="keyword">in</span> range(<span class="number">1024</span>)]</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">1</span>, n + <span class="number">1</span>):</span><br><span class="line"> dp[<span class="number">1</span>][i] = n - i + <span class="number">1</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> i <span class="keyword">in</span> range(<span class="number">2</span>, m+<span class="number">1</span>):</span><br><span class="line"> <span class="keyword">if</span> i & <span class="number">1</span>:</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(n , <span class="number">0</span>, <span 
class="number">-1</span>):</span><br><span class="line"> dp[i][j] = (dp[i - <span class="number">1</span>][j - <span class="number">1</span>] + dp[i][j + <span class="number">1</span>]) % <span class="number">10000</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(<span class="number">1</span>, n+<span class="number">1</span>):</span><br><span class="line"> dp[i][j] = (dp[i - <span class="number">1</span>][j + <span class="number">1</span>] + dp[i][j - <span class="number">1</span>]) % <span class="number">10000</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> m & <span class="number">1</span>:</span><br><span class="line"> ans = dp[m][<span class="number">1</span>]</span><br><span class="line"><span class="keyword">else</span>:</span><br><span class="line"> ans = dp[m][n]</span><br><span class="line">print(ans)</span><br></pre></td></tr></table></figure><h3 id="编程题4-螺旋矩阵"><a href="#编程题4-螺旋矩阵" class="headerlink" title="编程题4 螺旋矩阵"></a>编程题4 螺旋矩阵</h3><p>问题描述<br> 对于一个 n 行 m 列的表格,我们可以使用螺旋的方式给表格依次填上正整数,我们称填好的表格为一个螺旋矩阵。<br> 例如,一个 4 行 5 列的螺旋矩阵如下:<br> 1 2 3 4 5<br> 14 15 16 17 6<br> 13 20 19 18 7<br> 12 11 10 9 8<br>输入格式<br> 输入的第一行包含两个整数 n, m,分别表示螺旋矩阵的行数和列数。<br> 第二行包含两个整数 r, c,表示要求的行号和列号。<br>输出格式<br> 输出一个整数,表示螺旋矩阵中第 r 行第 c 列的元素的值。<br>样例输入<br>4 5<br>2 2<br>样例输出<br>15<br>评测用例规模与约定<br> 对于 30% 的评测用例,2 <= n, m <= 20。<br> 对于 70% 的评测用例,2 <= n, m <= 100。<br> 对于所有评测用例,2 <= n, m <= 1000,1 <= r <= n,1 <= c <= m。<br><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line">n, m = map(int, input().split())</span><br><span class="line">r, c = map(int, input().split())</span><br><span class="line">ansList = [[<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(m)] <span class="keyword">for</span> _ <span class="keyword">in</span> range(n)]</span><br><span class="line">vis = [[<span class="number">0</span> <span 
class="keyword">for</span> _ <span class="keyword">in</span> range(m)] <span class="keyword">for</span> _ <span class="keyword">in</span> range(n)]</span><br><span class="line">i = <span class="number">1</span></span><br><span class="line">x = <span class="number">0</span> <span class="comment"># 当前纵坐标</span></span><br><span class="line">y = <span class="number">0</span> <span class="comment"># 当前横坐标</span></span><br><span class="line"><span class="keyword">while</span> i < n * m:</span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> y < m <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> y += <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> y -= <span class="number">1</span></span><br><span class="line"> x += <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> x < n <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> x += <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> x -= <span class="number">1</span></span><br><span class="line"> y -= <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> y >= <span class="number">0</span> <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> y -= <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> y += <span class="number">1</span></span><br><span class="line"> x -= <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">while</span> x >= <span class="number">0</span> <span class="keyword">and</span> vis[x][y] == <span class="number">0</span>:</span><br><span class="line"> ansList[x][y] = i</span><br><span class="line"> vis[x][y] = <span class="number">1</span></span><br><span class="line"> i += <span class="number">1</span></span><br><span class="line"> x -= <span class="number">1</span></span><br><span class="line"> x += <span class="number">1</span></span><br><span class="line"> y += <span class="number">1</span></span><br><span class="line">print(ansList[r<span class="number">-1</span>][c<span class="number">-1</span>])</span><br></pre></td></tr></table></figure></p><h3 id="编程题5-村庄通电"><a href="#编程题5-村庄通电" class="headerlink" title="编程题5 村庄通电"></a>编程题5 村庄通电</h3><p>问题描述<br> 2015年,全中国实现了户户通电。作为一名电力建设者,小明正在帮助一带一路上的国家通电。<br> 这一次,小明要帮助 n 个村庄通电,其中 1 号村庄正好可以建立一个发电站,所发的电足够所有村庄使用。<br> 现在,这 n 个村庄之间都没有电线相连,小明主要要做的是架设电线连接这些村庄,使得所有村庄都直接或间接的与发电站相通。<br> 小明测量了所有村庄的位置(坐标)和高度,如果要连接两个村庄,小明需要花费两个村庄之间的坐标距离加上高度差的平方,形式化描述为坐标为 (x_1, y_1) 高度为 h_1 的村庄与坐标为 (x_2, y_2) 高度为 h_2 的村庄之间连接的费用为<br> sqrt((x_1-x_2)<em>(x_1-x_2)+(y_1-y_2)</em>(y_1-y_2))+(h_1-h_2)*(h_1-h_2)。<br> 在上式中 sqrt 
表示取括号内的平方根。请注意括号的位置,高度的计算方式与横纵坐标的计算方式不同。<br> 由于经费有限,请帮助小明计算他至少要花费多少费用才能使这 n 个村庄都通电。<br>输入格式<br> 输入的第一行包含一个整数 n ,表示村庄的数量。<br> 接下来 n 行,每个三个整数 x, y, h,分别表示一个村庄的横、纵坐标和高度,其中第一个村庄可以建立发电站。<br>输出格式<br> 输出一行,包含一个实数,四舍五入保留 2 位小数,表示答案。<br>样例输入<br>4<br>1 1 3<br>9 9 7<br>8 8 6<br>4 5 4<br>样例输出<br>17.41<br>评测用例规模与约定<br> 对于 30% 的评测用例,1 <= n <= 10;<br> 对于 60% 的评测用例,1 <= n <= 100;<br> 对于所有评测用例,1 <= n <= 1000,0 <= x, y, h <= 10000。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"></span><br></pre></td></tr></table></figure><h3 id="编程题6-小明植树"><a href="#编程题6-小明植树" class="headerlink" title="编程题6 小明植树"></a>编程题6 小明植树</h3><p>问题描述<br> 小明和朋友们一起去郊外植树,他们带了一些在自己实验室精心研究出的小树苗。<br> 小明和朋友们一共有 n 个人,他们经过精心挑选,在一块空地上每个人挑选了一个适合植树的位置,总共 n 个。他们准备把自己带的树苗都植下去。<br> 然而,他们遇到了一个困难:有的树苗比较大,而有的位置挨太近,导致两棵树植下去后会撞在一起。<br> 他们将树看成一个圆,圆心在他们找的位置上。如果两棵树对应的圆相交,这两棵树就不适合同时植下(相切不受影响),称为两棵树冲突。<br> 小明和朋友们决定先合计合计,只将其中的一部分树植下去,保证没有互相冲突的树。他们同时希望这些树所能覆盖的面积和(圆面积和)最大。<br>输入格式<br> 输入的第一行包含一个整数 n ,表示人数,即准备植树的位置数。<br> 接下来 n 行,每行三个整数 x, y, r,表示一棵树在空地上的横、纵坐标和半径。<br>输出格式<br> 输出一行包含一个整数,表示在不冲突下可以植树的面积和。由于每棵树的面积都是圆周率的整数倍,请输出答案除以圆周率后的值(应当是一个整数)。<br>样例输入<br>6<br>1 1 2<br>1 4 2<br>1 7 2<br>4 1 2<br>4 4 2<br>4 7 2<br>样例输出<br>12<br>评测用例规模与约定<br> 对于 30% 的评测用例,1 <= n <= 10;<br> 对于 60% 的评测用例,1 <= n <= 20;<br> 对于所有评测用例,1 <= n <= 30,0 <= x, y <= 1000,1 <= r <= 1000。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">isTure</span><span class="params">(i)</span>:</span></span><br><span class="line"> <span class="keyword">for</span> j <span class="keyword">in</span> range(n):</span><br><span class="line"> <span class="keyword">if</span> i != j <span class="keyword">and</span> vis[j]:</span><br><span class="line"> <span class="keyword">if</span> (x[i] - x[j]) * (x[i] - x[j]) + (y[i] - y[j]) * (y[i] - y[j]) < (r[i] + r[j]) * (r[i] + r[j]):</span><br><span class="line"> <span class="keyword">return</span> <span class="literal">False</span></span><br><span class="line"> <span class="keyword">return</span> <span class="literal">True</span></span><br><span 
class="line"></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">dfs</span><span class="params">(step, sum)</span>:</span></span><br><span class="line"> <span class="keyword">global</span> ans</span><br><span class="line"> <span class="keyword">if</span> step == n:</span><br><span class="line"> ans = max(ans, sum)</span><br><span class="line"> <span class="keyword">return</span></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(n):</span><br><span class="line"> <span class="keyword">if</span> vis[i] == <span class="number">0</span>:</span><br><span class="line"> tmp = r[i]</span><br><span class="line"> <span class="keyword">if</span> isTure(i) == <span class="literal">False</span>:</span><br><span class="line"> r[i] = <span class="number">0</span></span><br><span class="line"> vis[i] = <span class="number">1</span></span><br><span class="line"> dfs(step + <span class="number">1</span>, sum + r[i] * r[i])</span><br><span class="line"> vis[i] = <span class="number">0</span></span><br><span class="line"> r[i] = tmp</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">'__main__'</span>:</span><br><span class="line"> PI = <span class="number">3.14</span></span><br><span class="line"> ans = <span class="number">0</span></span><br><span class="line"> x = []</span><br><span class="line"> y = []</span><br><span class="line"> r = []</span><br><span class="line"> n = int(input())</span><br><span class="line"> vis = [<span class="number">0</span> <span class="keyword">for</span> _ <span class="keyword">in</span> range(n)]</span><br><span class="line"> <span class="keyword">for</span> _ <span class="keyword">in</span> range(n):</span><br><span class="line"> xt, yt, rt = map(int, input().split())</span><br><span class="line"> x.append(xt)</span><br><span class="line"> y.append(yt)</span><br><span class="line"> r.append(rt)</span><br><span class="line"> dfs(<span class="number">0</span>, <span class="number">0</span>)</span><br><span class="line"></span><br><span class="line"> print(ans)</span><br></pre></td></tr></table></figure>]]></content>
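<p>补充说明: 上面“编程题5 村庄通电”的代码块在原文中留空, 这里给出一个基于 Prim 最小生成树的参考实现草稿(思路: 把村庄看作图中的点, 边权按题目公式计算, 从 1 号村庄开始求最小生成树的总费用并保留两位小数; 仅为示例写法, 变量名均为自拟): </p>
<figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line">import math</span><br><span class="line"></span><br><span class="line">def prim(n, x, y, h):</span><br><span class="line">    # 两村庄之间的连线费用: 平面坐标距离加上高度差的平方(与题目公式一致)</span><br><span class="line">    def cost(i, j):</span><br><span class="line">        return math.sqrt((x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2) + (h[i] - h[j]) ** 2</span><br><span class="line">    INF = float('inf')</span><br><span class="line">    dist = [INF] * n            # dist[i]: 点 i 连到当前生成树的最小费用</span><br><span class="line">    vis = [False] * n</span><br><span class="line">    dist[0] = 0.0               # 1 号村庄(下标 0)为发电站, 作为起点</span><br><span class="line">    total = 0.0</span><br><span class="line">    for _ in range(n):</span><br><span class="line">        k = -1                  # 选出尚未加入生成树且费用最小的点</span><br><span class="line">        for j in range(n):</span><br><span class="line">            if not vis[j] and (k == -1 or dist[k] > dist[j]):</span><br><span class="line">                k = j</span><br><span class="line">        vis[k] = True</span><br><span class="line">        total += dist[k]</span><br><span class="line">        for j in range(n):      # 用新加入的点 k 更新其余各点到生成树的费用</span><br><span class="line">            w = cost(k, j)</span><br><span class="line">            if not vis[j] and dist[j] > w:</span><br><span class="line">                dist[j] = w</span><br><span class="line">    return total</span><br><span class="line"></span><br><span class="line">if __name__ == '__main__':</span><br><span class="line">    n = int(input())</span><br><span class="line">    x, y, h = [], [], []</span><br><span class="line">    for _ in range(n):</span><br><span class="line">        a, b, c = map(int, input().split())</span><br><span class="line">        x.append(a)</span><br><span class="line">        y.append(b)</span><br><span class="line">        h.append(c)</span><br><span class="line">    print('%.2f' % prim(n, x, y, h))</span><br></pre></td></tr></table></figure>
<p>用题目样例验证: 输入 4 个村庄 (1,1,3)、(9,9,7)、(8,8,6)、(4,5,4) 时, 上述实现输出 17.41, 与样例输出一致。</p>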
<summary type="html">
<h2 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h2><h3 id="填空题1"><a href="#填空题1" class="headerlink" title="填空题1"></a>填空题1</h
</summary>
<category term="Python算法" scheme="https://plutoacharon.github.io/categories/Python%E7%AE%97%E6%B3%95/"/>
<category term="Python算法" scheme="https://plutoacharon.github.io/tags/Python%E7%AE%97%E6%B3%95/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(六)---- 基于Docker配置KeepAlive支持Nginx高可用</title>
<link href="https://plutoacharon.github.io/2020/04/21/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E5%85%AD%EF%BC%89-%E5%9F%BA%E4%BA%8EDocker%E9%85%8D%E7%BD%AEKeepAlive%E6%94%AF%E6%8C%81Nginx%E9%AB%98%E5%8F%AF%E7%94%A8/"/>
<id>https://plutoacharon.github.io/2020/04/21/HA高可用与负载均衡入门到实战(六)-基于Docker配置KeepAlive支持Nginx高可用/</id>
<published>2020-04-21T09:52:59.000Z</published>
<updated>2020-04-21T09:54:38.065Z</updated>
<content type="html"><![CDATA[<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p><p>拓扑图:<br><img src="https://img-blog.csdnimg.cn/20200416115629660.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>上文讲述了简单的基于Docker的配置Nginx反向代理和负载均衡</p><p>本文讲述Keepalived与Nginx共同实现高可用实例<br>|IP地址 | 容器名 |功能|<br>|–|–|–|<br>| 172.18.0.11| nginx1| nginx+keepalived |<br>| 172.18.0.12|nginx2| nginx+keepalived |<br>| 172.18.0.10|VIP| |</p><h2 id="安装配置keepalived"><a href="#安装配置keepalived" class="headerlink" title="安装配置keepalived"></a>安装配置keepalived</h2><h3 id="使用nginx镜像生成nginx-keep镜像"><a href="#使用nginx镜像生成nginx-keep镜像" class="headerlink" title="使用nginx镜像生成nginx-keep镜像"></a>使用nginx镜像生成nginx-keep镜像</h3><p>1) 启动nginx容器并进入<br><code>docker run -d --privileged nginx /usr/sbin/init</code></p><p>2) 在nginx容器中使用yum方式安装keepalived<br><code>yum install -y keepalived</code><br>3) 保存容器为镜像<br><code>docker commit 容器ID nginx-keep</code></p><h3 id="使用nginx-keep镜像启动nginx1和nginx2两个容器"><a href="#使用nginx-keep镜像启动nginx1和nginx2两个容器" class="headerlink" title="使用nginx-keep镜像启动nginx1和nginx2两个容器"></a>使用nginx-keep镜像启动nginx1和nginx2两个容器</h3><p>1) 创建docker网络<br> <code>docker network create --subnet=172.18.0.0/16 cluster</code><br>2) 查看宿主机上的docker网络类型种类<br><code>docker network ls</code><br>3) 启动容器nginx1,设定地址为172.18.0.11<br><code>docker run -d --privileged --net cluster --ip 172.18.0.11 --name nginx1 nginx-keep /usr/sbin/init</code><br>4) 启动容器nginx2,设定地址为172.18.0.12<br><code>docker run -d --privileged --net cluster --ip 172.18.0.12 --name nginx2 nginx-keep /usr/sbin/init</code></p><p>5) 配置容器nginx1, nginx2的web服务,编辑首页内容为“nginx1”,“nginx2”, 在宿主机访问<br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.12</span></span><br><span class="line">nginx2</span><br><span class="line"></span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.11</span></span><br><span class="line">nginx1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="在nginx1和nginx2两个容器配置keepalived"><a href="#在nginx1和nginx2两个容器配置keepalived" class="headerlink" title="在nginx1和nginx2两个容器配置keepalived"></a>在nginx1和nginx2两个容器配置keepalived</h3><p>1) 在nginx1编辑 /etc/keepalived/keepalived.conf ,启动keepalived服务<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span 
class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br></pre></td><td class="code"><pre><span class="line"> ! Configuration File for keepalived</span><br><span class="line"></span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> acassen@firewall.loc</span><br><span class="line"> failover@firewall.loc</span><br><span class="line"> sysadmin@firewall.loc</span><br><span class="line"> }</span><br><span class="line"> notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id nginx1</span><br><span class="line"> vrrp_skip_check_adv_addr</span><br><span class="line"> #vrrp_strict</span><br><span class="line"> vrrp_garp_interval 0</span><br><span class="line"> vrrp_gna_interval 0</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state MASTER</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 100</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在nginx2编辑 /etc/keepalived/keepalived.conf ,启动keepalived服务<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line"> ! 
Configuration File for keepalived</span><br><span class="line"></span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> acassen@firewall.loc</span><br><span class="line"> failover@firewall.loc</span><br><span class="line"> sysadmin@firewall.loc</span><br><span class="line"> }</span><br><span class="line"> notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id nginx2</span><br><span class="line"> vrrp_skip_check_adv_addr</span><br><span class="line"> #vrrp_strict</span><br><span class="line"> vrrp_garp_interval 0</span><br><span class="line"> vrrp_gna_interval 0</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state BACKUP</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 90</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p><strong>注意:</strong><br>在 <code>/etc/keepalived/keepalived.conf</code>配置文件中将<code>#vrrp_strict</code>注释掉, 否则会出现ping VIP不通的现象<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">vrrp_strict</span><br><span class="line">#严格遵守VRRP协议。 这将禁止:</span><br><span class="line"></span><br><span class="line">0 VIPs</span><br><span class="line">unicast peers (单播对等体)</span><br><span class="line">IPv6 addresses in VRRP version 2(VRRP版本2中的IPv6地址)</span><br></pre></td></tr></table></figure></p><blockquote><p>即vrrp_strict:严格遵守VRRP协议。下列情况将会阻止启动Keepalived:1. 没有VIP地址。2. 单播邻居。3. 
在VRRP版本2中有IPv6地址。</p></blockquote><p>3) 在宿主机使用浏览器访问虚拟地址<br><code>curl http:// 172.18.0.10</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br></pre></td></tr></table></figure></p><p>4) 在nginx1上当掉网卡<br><code>ifconfig eth0 down</code></p><p>5) 在宿主机使用浏览器访问虚拟地址<br><code>curl http:// 172.18.0.10</code><br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br></pre></td></tr></table></figure></p><h2 id="配置keepalived-支持nginx高可用"><a href="#配置keepalived-支持nginx高可用" class="headerlink" title="配置keepalived 支持nginx高可用"></a>配置keepalived 支持nginx高可用</h2><h3 id="编写-Nginx-状态检测脚本"><a href="#编写-Nginx-状态检测脚本" class="headerlink" title="编写 Nginx 状态检测脚本"></a>编写 Nginx 状态检测脚本</h3><p>1) 在nginx1上编写 Nginx 状态检测脚本<code>/etc/keepalived/nginx_check.sh</code></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#!/bin/bash</span></span><br><span class="line"><span class="keyword">if</span> [ `ps -C nginx --no-header |wc -l` -eq 0 ]</span><br><span class="line"> <span class="keyword">then</span></span><br><span class="line"> systemctl start nginx </span><br><span class="line"> sleep 2</span><br><span class="line"> <span class="keyword">if</span> [ `ps -C nginx --no-header |wc -l` -eq 0 ]</span><br><span class="line"> <span class="keyword">then</span></span><br><span class="line"> <span class="built_in">kill</span> keepalived</span><br><span class="line"> <span class="keyword">fi</span></span><br><span class="line"><span class="keyword">fi</span></span><br></pre></td></tr></table></figure><blockquote><p>脚本说明: 当检测nginx没有进程时选择启动nginx, 如果启动失败则关闭keepalived<br>2) 赋予/etc/keepalived/nginx_check.sh执行权限<br> <code>chmod a+x /etc/keepalived/nginx_check.sh</code></p></blockquote><h3 id="配置keepalived-支持nginx高可用-1"><a href="#配置keepalived-支持nginx高可用-1" class="headerlink" title="配置keepalived 支持nginx高可用"></a>配置keepalived 支持nginx高可用</h3><p>1) 在nginx1上编辑/etc/keepalived/keepalived.conf<br> <figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span 
class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">! Configuration File for keepalived</span><br><span class="line"></span><br><span class="line">global_defs {</span><br><span class="line"> notification_email {</span><br><span class="line"> acassen@firewall.loc</span><br><span class="line"> failover@firewall.loc</span><br><span class="line"> sysadmin@firewall.loc</span><br><span class="line"> }</span><br><span class="line"> notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line"> smtp_server 192.168.200.1</span><br><span class="line"> smtp_connect_timeout 30</span><br><span class="line"> router_id nginx1</span><br><span class="line"> vrrp_skip_check_adv_addr</span><br><span class="line"> #vrrp_strict</span><br><span class="line"> vrrp_garp_interval 0</span><br><span class="line"> vrrp_gna_interval 0</span><br><span class="line">}</span><br><span class="line">vrrp_script chk_nginx{</span><br><span class="line"> script "/etc/keepalived/nginx_check.sh"</span><br><span class="line"> interval 2</span><br><span class="line"> weight -20</span><br><span class="line">}</span><br><span class="line">vrrp_instance VI_1 {</span><br><span class="line"> state MASTER</span><br><span class="line"> interface eth0</span><br><span class="line"> virtual_router_id 51</span><br><span class="line"> priority 100</span><br><span class="line"> advert_int 1</span><br><span class="line"> authentication {</span><br><span class="line"> auth_type PASS</span><br><span class="line"> auth_pass 1111</span><br><span class="line"> }</span><br><span class="line"> track_script{</span><br><span class="line"> chk_nginx</span><br><span class="line">}</span><br><span class="line"> virtual_ipaddress {</span><br><span class="line"> 172.18.0.10</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 重新启动keepalived,在主机使用浏览器访问虚拟地址<br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx1</span><br></pre></td></tr></table></figure></p><p>3) 在nginx1停止nginx服务,在主机使用浏览器访问虚拟地址<br> <figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.10</span></span><br><span class="line">nginx2</span><br></pre></td></tr></table></figure></p><blockquote><p>原因: weight -20 每当运行一次vrrp_script chk_nginx脚本, 本机的权重减20</p></blockquote>]]></content>
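<p>补充说明: 上文 nginx_check.sh 中的 <code>kill keepalived</code> 一句, 按 kill 的用法需要传入进程号, 直接写进程名一般不会生效; 下面给出该检测脚本的一个参考写法草稿(逻辑与原文一致, 假设容器内可以使用 systemctl 与 pkill): </p>
<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">#!/bin/bash</span><br><span class="line"># nginx 状态检测脚本(示例): nginx 不在运行则尝试拉起, 拉起失败则结束 keepalived, 让 VIP 漂移到备机</span><br><span class="line">if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then</span><br><span class="line">    systemctl start nginx</span><br><span class="line">    sleep 2</span><br><span class="line">    if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then</span><br><span class="line">        pkill keepalived    # 按进程名结束 keepalived 进程</span><br><span class="line">    fi</span><br><span class="line">fi</span><br></pre></td></tr></table></figure>
<p>另外, 也可以让脚本在 nginx 异常时以非 0 状态退出, 配合配置中的 weight -20, keepalived 只降低本机优先级而不退出进程, 同样可以触发 VIP 切换。</p>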
<summary type="html">
<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p>
<p>拓扑图:<br><img src="https://img-blog.c
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>解决Kubernetes1.15.1 部署Flannel网络后pod及容器无法跨主机互通问题</title>
<link href="https://plutoacharon.github.io/2020/04/21/%E8%A7%A3%E5%86%B3Kubernetes1-15-1-%E9%83%A8%E7%BD%B2Flannel%E7%BD%91%E7%BB%9C%E5%90%8Epod%E5%8F%8A%E5%AE%B9%E5%99%A8%E6%97%A0%E6%B3%95%E8%B7%A8%E4%B8%BB%E6%9C%BA%E4%BA%92%E9%80%9A%E9%97%AE%E9%A2%98/"/>
<id>https://plutoacharon.github.io/2020/04/21/解决Kubernetes1-15-1-部署Flannel网络后pod及容器无法跨主机互通问题/</id>
<published>2020-04-21T09:50:39.000Z</published>
<updated>2020-04-21T09:50:53.251Z</updated>
<content type="html"><![CDATA[<p>记一次部署Flannel网络后网络不通问题, 查询网上资料无果</p><p>自己记录一下解决过程</p><h2 id="现象"><a href="#现象" class="headerlink" title="现象"></a>现象</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 5h44m</span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 5h45m</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d</span><br><span class="line">kubernetes-dashboard-7d75c474bb-hg7zt 1/1 Running 0 71m</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get node</span></span><br><span class="line">NAME STATUS ROLES AGE VERSION</span><br><span class="line">k8s-master01 Ready master 10d v1.15.1</span><br><span class="line">k8s-node01 Ready <none> 9d v1.15.1</span><br><span class="line">k8s-node02 Ready <none> 9d v1.15.1</span><br></pre></td></tr></table></figure><p>由以上可以看到我部署Flannel以后, master检测到node节点 并且flannel容器显示<code>Running</code>正常</p><h2 id="排查问题"><a href="#排查问题" class="headerlink" title="排查问题"></a>排查问题</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span 
class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># ip a</span></span><br><span class="line">1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1</span><br><span class="line"> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00</span><br><span class="line"> inet 127.0.0.1/8 scope host lo</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000</span><br><span class="line"> link/ether 00:0c:29:2c:d1:c2 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 192.168.0.50/24 brd 192.168.0.255 scope global noprefixroute ens33</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet6 fe80::20c:29ff:fe2c:d1c2/64 scope link </span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default </span><br><span class="line"> link/ether 02:42:1f:d8:95:21 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">4: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1000</span><br><span class="line"> link/ether ee:02:3a:98:e3:e3 brd ff:ff:ff:ff:ff:ff</span><br><span class="line">5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default </span><br><span class="line"> link/ether d2:c2:72:50:95:31 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 10.96.0.10/32 brd 10.96.0.10 scope global kube-ipvs0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet 10.96.0.1/32 brd 10.96.0.1 scope global kube-ipvs0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line"> inet 10.110.65.174/32 brd 10.110.65.174 scope global kube-ipvs0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br><span class="line">6: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noqueue state DOWN group default </span><br><span class="line"> link/ether 7e:35:6d:f9:50:c3 brd ff:ff:ff:ff:ff:ff</span><br><span class="line">7: cni0: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000</span><br><span class="line"> link/ether 8a:1b:ab:4c:83:c9 brd ff:ff:ff:ff:ff:ff</span><br><span class="line"> inet 10.244.0.1/24 scope global cni0</span><br><span class="line"> valid_lft forever preferred_lft forever</span><br></pre></td></tr></table></figure><p><code>6: flannel.1</code>网络没有ip信息, 并且显示<code>DOWN</code>的状态</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># ping 10.244.2.6</span></span><br><span class="line">PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.</span><br><span class="line">^C</span><br><span class="line">--- 10.244.2.6 ping statistics 
---</span><br><span class="line">13 packets transmitted, 0 received, 100% packet loss, time 12004ms</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># ping 10.244.2.6</span></span><br><span class="line">PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.</span><br><span class="line">^C</span><br><span class="line">--- 10.244.2.6 ping statistics ---</span><br><span class="line">36 packets transmitted, 0 received, 100% packet loss, time 35012ms</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node02 ~]<span class="comment"># ping 10.244.2.6</span></span><br><span class="line">PING 10.244.2.6 (10.244.2.6) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.2.6: icmp_seq=1 ttl=64 time=0.131 ms</span><br><span class="line">64 bytes from 10.244.2.6: icmp_seq=2 ttl=64 time=0.042 ms</span><br><span class="line">^C</span><br><span class="line">--- 10.244.2.6 ping statistics ---</span><br><span class="line">2 packets transmitted, 2 received, 0% packet loss, time 999ms</span><br></pre></td></tr></table></figure><p>一个存在与node2的pod只有node2能ping 通, 其他节点全部超时</p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><h3 id="方法1"><a href="#方法1" class="headerlink" title="方法1"></a>方法1</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># sudo iptables -P INPUT ACCEPT</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># sudo iptables -P OUTPUT ACCEPT</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># sudo 
iptables -P FORWARD ACCEPT</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># iptables -L -n</span></span><br><span class="line">Chain INPUT (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line"></span><br><span class="line">Chain FORWARD (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">KUBE-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */</span><br><span class="line">ACCEPT all -- 10.244.0.0/16 0.0.0.0/0 </span><br><span class="line">ACCEPT all -- 0.0.0.0/0 10.244.0.0/16 </span><br><span class="line"></span><br><span class="line">Chain OUTPUT (policy ACCEPT)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0 </span><br><span class="line"></span><br><span class="line">Chain DOCKER (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain DOCKER-ISOLATION-STAGE-1 (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain DOCKER-ISOLATION-STAGE-2 (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain DOCKER-USER (0 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line"></span><br><span class="line">Chain KUBE-FIREWALL (2 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall <span class="keyword">for</span> dropping marked packets */ mark match 0x8000/0x8000</span><br><span class="line"></span><br><span class="line">Chain KUBE-FORWARD (1 references)</span><br><span class="line">target prot opt <span class="built_in">source</span> destination </span><br><span class="line">ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000</span><br><span class="line">ACCEPT all -- 10.244.0.0/16 0.0.0.0/0 /* kubernetes forwarding conntrack pod <span class="built_in">source</span> rule */ ctstate RELATED,ESTABLISHED</span><br><span class="line">ACCEPT all -- 0.0.0.0/0 10.244.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED</span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># service iptables save</span></span><br><span class="line">iptables: Saving firewall rules to /etc/sysconfig/iptables:[ 确定 ]</span><br></pre></td></tr></table></figure><p>清理<code>IPTABLES</code>规则, 保存<br>问题没有解决 使用方法二</p><h3 id="方法2"><a href="#方法2" class="headerlink" title="方法2"></a>方法2</h3><p>卸载flannel网络<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span 
class="line">10</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#第一步,在master节点删除flannel</span></span><br><span class="line">kubectl delete -f kube-flannel.yml</span><br><span class="line"></span><br><span class="line"><span class="comment">#第二步,在node节点清理flannel网络留下的文件</span></span><br><span class="line">ifconfig cni0 down</span><br><span class="line">ip link delete cni0</span><br><span class="line">ifconfig flannel.1 down</span><br><span class="line">ip link delete flannel.1</span><br><span class="line">rm -rf /var/lib/cni/</span><br><span class="line">rm -f /etc/cni/net.d/*</span><br></pre></td></tr></table></figure></p><p>重新部署Flannel网络<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl create -f kube-flannel.yml </span></span><br><span class="line">podsecuritypolicy.policy/psp.flannel.unprivileged created</span><br><span class="line">clusterrole.rbac.authorization.k8s.io/flannel created</span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/flannel created</span><br><span class="line">serviceaccount/flannel created</span><br><span class="line">configmap/kube-flannel-cfg created</span><br><span class="line">daemonset.apps/kube-flannel-ds-amd64 created</span><br><span class="line">daemonset.apps/kube-flannel-ds-arm64 created</span><br><span class="line">daemonset.apps/kube-flannel-ds-arm created</span><br><span class="line">daemonset.apps/kube-flannel-ds-ppc64le created</span><br><span class="line">daemonset.apps/kube-flannel-ds-s390x created</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-8bpdd 1/1 Running 0 17s</span><br><span class="line">coredns-5c98db65d4-knfcj 1/1 Running 0 43s</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-56hsf 1/1 Running 0 25m</span><br><span class="line">kube-flannel-ds-amd64-56t49 1/1 Running 0 25m</span><br><span class="line">kube-flannel-ds-amd64-qz42z 1/1 Running 0 25m</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 10d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 10d</span><br><span 
class="line">kube-proxy-t47n9 1/1 Running 2 10d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d</span><br><span class="line">kubernetes-dashboard-7d75c474bb-4r7hc 1/1 Running 0 23m</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>重新部署Flannel网络后 容器需要重置, 删除就可以 k8s会重新自动添加<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># ping 10.244.1.2</span></span><br><span class="line">PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=1.04 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.498 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.575 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=4 ttl=63 time=0.578 ms</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># ping 10.244.1.2</span></span><br><span class="line">PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=1 ttl=64 time=0.065 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=2 ttl=64 time=0.038 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=3 ttl=64 time=0.135 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=4 ttl=64 time=0.058 ms</span><br><span class="line">^C</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node02 ~]<span class="comment"># ping 10.244.1.2</span></span><br><span class="line">PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.760 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.510 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=3 ttl=63 time=0.442 ms</span><br><span class="line">64 bytes from 10.244.1.2: icmp_seq=4 ttl=63 time=0.525 ms</span><br><span class="line">^C</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span 
class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># ifconfig </span></span><br><span class="line">docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500</span><br><span class="line"> inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255</span><br><span class="line"> ether 02:42:1f:d8:95:21 txqueuelen 0 (Ethernet)</span><br><span class="line"> RX packets 0 bytes 0 (0.0 B)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 0 bytes 0 (0.0 B)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500</span><br><span class="line"> inet 192.168.0.50 netmask 255.255.255.0 broadcast 192.168.0.255</span><br><span class="line"> inet6 fe80::20c:29ff:fe2c:d1c2 prefixlen 64 scopeid 0x20<link></span><br><span class="line"> ether 00:0c:29:2c:d1:c2 txqueuelen 1000 (Ethernet)</span><br><span class="line"> RX packets 737868 bytes 493443231 (470.5 MiB)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 1656623 bytes 3510224771 (3.2 GiB)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450</span><br><span class="line"> inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0</span><br><span class="line"> ether aa:50:d6:f9:09:e5 txqueuelen 0 (Ethernet)</span><br><span class="line"> RX packets 14 bytes 1728 (1.6 KiB)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 67 bytes 5973 (5.8 KiB)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536</span><br><span class="line"> inet 127.0.0.1 netmask 255.0.0.0</span><br><span class="line"> loop txqueuelen 1 (Local Loopback)</span><br><span class="line"> RX packets 6944750 bytes 1242999056 (1.1 GiB)</span><br><span class="line"> RX errors 0 dropped 0 overruns 0 frame 0</span><br><span class="line"> TX packets 6944750 bytes 1242999056 (1.1 GiB)</span><br><span class="line"> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><br><span class="line"></span><br><span class="line">[root@k8s-master01 flannel]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>flannel网络显示正常, 容器之间可以跨主机互通!</p>]]></content>
<summary type="html">
<p>记一次部署Flannel网络后网络不通问题, 查询网上资料无果</p>
<p>自己记录一下解决过程</p>
<h2 id="现象"><a href="#现象" class="headerlink" title="现象"></a>现象</h2><figure class="h
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(五)----Kubernetes1.15.1安装 Dashboard 的WEB UI插件</title>
<link href="https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E4%BA%94-Kubernetes1-15-1%E5%AE%89%E8%A3%85-Dashboard-%E7%9A%84WEB-UI%E6%8F%92%E4%BB%B6/"/>
<id>https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-入门到实践-五-Kubernetes1-15-1安装-Dashboard-的WEB-UI插件/</id>
<published>2020-04-21T09:50:00.000Z</published>
<updated>2020-04-21T09:50:16.811Z</updated>
<content type="html"><![CDATA[<p>上节讲解了通过kubeadm 搭建集群kubeadm1.15.1环境,现在的集群已经搭建成功了,今天给大家展示Kubernetes Dashboard 插件的安装</p><h2 id="下载官方的yaml文件"><a href="#下载官方的yaml文件" class="headerlink" title="下载官方的yaml文件"></a>下载官方的yaml文件</h2><p>进入官网:<code>https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml</span><br></pre></td></tr></table></figure></p><p> 修改:<br> type,指定端口类型为 NodePort,这样外界可以通过地址 nodeIP:nodePort 访问 dashboard<br> <img src="https://img-blog.csdnimg.cn/20200413184310625.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>如果网络不好,不能直接下载,需要手动创建kubernetes-dashboard.yaml文件<br><figure class="highlight yaml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span 
class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br><span class="line">152</span><br><span class="line">153</span><br><span class="line">154</span><br><span class="line">155</span><br><span class="line">156</span><br><span class="line">157</span><br><span class="line">158</span><br><span class="line">159</span><br><span class="line">160</span><br><span class="line">161</span><br><span class="line">162</span><br><span class="line">163</span><br><span class="line">164</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Copyright 2017 The Kubernetes Authors.</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># Licensed under the Apache License, Version 2.0 (the "License");</span></span><br><span class="line"><span class="comment"># you may not use this file except in compliance with the License.</span></span><br><span class="line"><span class="comment"># You may obtain a copy of the License at</span></span><br><span class="line"><span class="comment">#</span></span><br><span class="line"><span class="comment"># http://www.apache.org/licenses/LICENSE-2.0</span></span><br><span class="line"><span 
class="comment">#</span></span><br><span class="line"><span class="comment"># Unless required by applicable law or agreed to in writing, software</span></span><br><span class="line"><span class="comment"># distributed under the License is distributed on an "AS IS" BASIS,</span></span><br><span class="line"><span class="comment"># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.</span></span><br><span class="line"><span class="comment"># See the License for the specific language governing permissions and</span></span><br><span class="line"><span class="comment"># limitations under the License.</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># ------------------- Dashboard Secret ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Secret</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">type:</span> <span class="string">Opaque</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Service Account ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Role & Role Binding ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Role</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">rbac.authorization.k8s.io/v1</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-minimal</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">rules:</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span 
class="line"><span class="attr"> resources:</span> <span class="string">["secrets"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["create"]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to create 'kubernetes-dashboard-settings' config map.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["configmaps"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["create"]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to get, update and delete Dashboard exclusive secrets.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["secrets"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["kubernetes-dashboard-key-holder",</span> <span class="string">"kubernetes-dashboard-certs"</span><span class="string">]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["get",</span> <span class="string">"update"</span><span class="string">,</span> <span class="string">"delete"</span><span class="string">]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["configmaps"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["kubernetes-dashboard-settings"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["get",</span> <span class="string">"update"</span><span class="string">]</span></span><br><span class="line"> <span class="comment"># Allow Dashboard to get metrics from heapster.</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["services"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["heapster"]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["proxy"]</span></span><br><span class="line"><span class="attr">- apiGroups:</span> <span class="string">[""]</span></span><br><span class="line"><span class="attr"> resources:</span> <span class="string">["services/proxy"]</span></span><br><span class="line"><span class="attr"> resourceNames:</span> <span class="string">["heapster",</span> <span class="string">"http:heapster:"</span><span class="string">,</span> <span class="string">"https:heapster:"</span><span class="string">]</span></span><br><span class="line"><span class="attr"> verbs:</span> <span class="string">["get"]</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">rbac.authorization.k8s.io/v1</span></span><br><span class="line"><span class="attr">kind:</span> <span class="string">RoleBinding</span></span><br><span class="line"><span 
class="attr">metadata:</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-minimal</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">roleRef:</span></span><br><span class="line"><span class="attr"> apiGroup:</span> <span class="string">rbac.authorization.k8s.io</span></span><br><span class="line"><span class="attr"> kind:</span> <span class="string">Role</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard-minimal</span></span><br><span class="line"><span class="attr">subjects:</span></span><br><span class="line"><span class="attr">- kind:</span> <span class="string">ServiceAccount</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Deployment ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Deployment</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">apps/v1</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> replicas:</span> <span class="number">1</span></span><br><span class="line"><span class="attr"> revisionHistoryLimit:</span> <span class="number">10</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> matchLabels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> template:</span></span><br><span class="line"><span class="attr"> metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> spec:</span></span><br><span class="line"><span class="attr"> containers:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> image:</span> <span class="string">k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - containerPort:</span> <span class="number">8443</span></span><br><span class="line"><span class="attr"> protocol:</span> <span class="string">TCP</span></span><br><span class="line"><span class="attr"> args:</span></span><br><span class="line"><span class="bullet"> -</span> <span 
class="bullet">--auto-generate-certificates</span></span><br><span class="line"> <span class="comment"># Uncomment the following line to manually specify Kubernetes API server Host</span></span><br><span class="line"> <span class="comment"># If not specified, Dashboard will attempt to auto discover the API server and connect</span></span><br><span class="line"> <span class="comment"># to it. Uncomment only if the default does not work.</span></span><br><span class="line"> <span class="comment"># - --apiserver-host=http://my-address:port</span></span><br><span class="line"><span class="attr"> volumeMounts:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> mountPath:</span> <span class="string">/certs</span></span><br><span class="line"> <span class="comment"># Create on-disk volume to store exec logs</span></span><br><span class="line"><span class="attr"> - mountPath:</span> <span class="string">/tmp</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">tmp-volume</span></span><br><span class="line"><span class="attr"> livenessProbe:</span></span><br><span class="line"><span class="attr"> httpGet:</span></span><br><span class="line"><span class="attr"> scheme:</span> <span class="string">HTTPS</span></span><br><span class="line"><span class="attr"> path:</span> <span class="string">/</span></span><br><span class="line"><span class="attr"> port:</span> <span class="number">8443</span></span><br><span class="line"><span class="attr"> initialDelaySeconds:</span> <span class="number">30</span></span><br><span class="line"><span class="attr"> timeoutSeconds:</span> <span class="number">30</span></span><br><span class="line"><span class="attr"> volumes:</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> secret:</span></span><br><span class="line"><span class="attr"> secretName:</span> <span class="string">kubernetes-dashboard-certs</span></span><br><span class="line"><span class="attr"> - name:</span> <span class="string">tmp-volume</span></span><br><span class="line"><span class="attr"> emptyDir:</span> <span class="string">{}</span></span><br><span class="line"><span class="attr"> serviceAccountName:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"> <span class="comment"># Comment the following tolerations if Dashboard must not be deployed on master</span></span><br><span class="line"><span class="attr"> tolerations:</span></span><br><span class="line"><span class="attr"> - key:</span> <span class="string">node-role.kubernetes.io/master</span></span><br><span class="line"><span class="attr"> effect:</span> <span class="string">NoSchedule</span></span><br><span class="line"></span><br><span class="line"><span class="meta">---</span></span><br><span class="line"><span class="comment"># ------------------- Dashboard Service ------------------- #</span></span><br><span class="line"></span><br><span class="line"><span class="attr">kind:</span> <span class="string">Service</span></span><br><span class="line"><span class="attr">apiVersion:</span> <span class="string">v1</span></span><br><span class="line"><span class="attr">metadata:</span></span><br><span class="line"><span class="attr"> labels:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span 
class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> name:</span> <span class="string">kubernetes-dashboard</span></span><br><span class="line"><span class="attr"> namespace:</span> <span class="string">kube-system</span></span><br><span class="line"><span class="attr">spec:</span></span><br><span class="line"><span class="attr"> type:</span> <span class="string">NodePort</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="attr"> - port:</span> <span class="number">443</span></span><br><span class="line"><span class="attr"> targetPort:</span> <span class="number">8443</span></span><br><span class="line"><span class="attr"> nodePort:</span> <span class="number">32000</span></span><br><span class="line"><span class="attr"> selector:</span></span><br><span class="line"><span class="attr"> k8s-app:</span> <span class="string">kubernetes-dashboard</span></span><br></pre></td></tr></table></figure></p><h2 id="拉取镜像"><a href="#拉取镜像" class="headerlink" title="拉取镜像"></a>拉取镜像</h2><p>为了避免访问外国网站,这里直接通过国内的阿里镜像拉取,通过tag更改名称<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">docker pull registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span><br><span class="line">docker tag registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1</span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># docker pull registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span></span><br><span class="line">v1.10.1: Pulling from rsqlh/kubernetes-dashboard</span><br><span class="line">9518d8afb433: Pull complete </span><br><span class="line">Digest: sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747</span><br><span class="line">Status: Downloaded newer image <span class="keyword">for</span> registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span><br><span class="line">registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1</span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># docker tag registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1</span></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># docker images</span></span><br><span class="line">REPOSITORY TAG IMAGE ID CREATED SIZE</span><br><span class="line">registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard v1.10.1 f9aed6605b81 16 months ago 122MB</span><br><span class="line">k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.1 f9aed6605b81 16 months ago 122MB</span><br><span class="line">[root@k8s-node01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure><h2 id="部署yaml文件"><a href="#部署yaml文件" class="headerlink" title="部署yaml文件"></a>部署yaml文件</h2><p>通过<code>kubectl 
create -f</code>命令部署<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl create -f kubernetes-dashboard.yaml </span></span><br><span class="line">secret/kubernetes-dashboard-certs created</span><br><span class="line">serviceaccount/kubernetes-dashboard created</span><br><span class="line">role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created</span><br><span class="line">rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created</span><br><span class="line">deployment.apps/kubernetes-dashboard created</span><br><span class="line">service/kubernetes-dashboard created</span><br><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 3h53m</span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 3h53m</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d</span><br><span class="line">kubernetes-dashboard-7d75c474bb-zj9c6 1/1 Running 0 18s</span><br><span class="line">[root@k8s-master01 ui]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>可以看到<code>kubernetes-dashboard</code>处于Running状态<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ui]<span class="comment"># 
kubectl get svc -n kube-system</span></span><br><span class="line">NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</span><br><span class="line">kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 10d</span><br><span class="line">kubernetes-dashboard NodePort 10.110.65.174 <none> 443:32000/TCP 11m</span><br><span class="line">[root@k8s-master01 ui]<span class="comment"># kubectl get pod -n kube-system -o wide</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 4h5m 10.244.2.5 k8s-node02 <none> <none></span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 4h6m 10.244.1.5 k8s-node01 <none> <none></span><br><span class="line">etcd-k8s-master01 1/1 Running 2 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d 192.168.0.52 k8s-node02 <none> <none></span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d 192.168.0.51 k8s-node01 <none> <none></span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d 192.168.0.51 k8s-node01 <none> <none></span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d 192.168.0.52 k8s-node02 <none> <none></span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 10d 192.168.0.50 k8s-master01 <none> <none></span><br><span class="line">kubernetes-dashboard-7d75c474bb-zj9c6 1/1 Running 0 13m 10.244.1.6 k8s-node02 <none> <none></span><br><span class="line">[root@k8s-master01 ui]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>可以看到<code>kubernetes-dashboard</code>暴露在node2上的32000端口</p><h2 id="访问ui页面"><a href="#访问ui页面" class="headerlink" title="访问ui页面"></a>访问ui页面</h2><p><code>https://192.168.0.52:32000/</code> 这是我node2的ip地址<br>建议使用<code>firefox</code>访问, <code>Chrome</code>访问会禁止不安全证书访问<br><img src="https://img-blog.csdnimg.cn/20200413191431640.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200413193104912.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="Token令牌登录"><a href="#Token令牌登录" class="headerlink" title="Token令牌登录"></a>Token令牌登录</h3><ol><li>创建serviceaccount<br><code>kubectl create serviceaccount dashboard-admin -n kube-system</code><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span 
class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl create serviceaccount dashboard-admin -n kube-system</span></span><br><span class="line">serviceaccount/dashboard-admin created</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get sa -n kube-system</span></span><br><span class="line">NAME SECRETS AGE</span><br><span class="line">attachdetach-controller 1 10d</span><br><span class="line">bootstrap-signer 1 10d</span><br><span class="line">certificate-controller 1 10d</span><br><span class="line">clusterrole-aggregation-controller 1 10d</span><br><span class="line">coredns 1 10d</span><br><span class="line">cronjob-controller 1 10d</span><br><span class="line">daemon-set-controller 1 10d</span><br><span class="line">dashboard-admin 1 27s</span><br><span class="line">default 1 10d</span><br><span class="line">deployment-controller 1 10d</span><br><span class="line">disruption-controller 1 10d</span><br><span class="line">endpoint-controller 1 10d</span><br><span class="line">expand-controller 1 10d</span><br><span class="line">flannel 1 10d</span><br><span class="line">generic-garbage-collector 1 10d</span><br><span class="line">horizontal-pod-autoscaler 1 10d</span><br><span class="line">job-controller 1 10d</span><br><span class="line">kube-proxy 1 10d</span><br><span class="line">kubernetes-dashboard 1 48m</span><br><span class="line">namespace-controller 1 10d</span><br><span class="line">node-controller 1 10d</span><br><span class="line">persistent-volume-binder 1 10d</span><br><span class="line">pod-garbage-collector 1 10d</span><br><span class="line">pv-protection-controller 1 10d</span><br><span class="line">pvc-protection-controller 1 10d</span><br><span class="line">replicaset-controller 1 10d</span><br><span class="line">replication-controller 1 10d</span><br><span class="line">resourcequota-controller 1 10d</span><br><span class="line">service-account-controller 1 10d</span><br><span class="line">service-controller 1 10d</span><br><span class="line">statefulset-controller 1 10d</span><br><span class="line">token-cleaner 1 10d</span><br><span class="line">ttl-controller 1 10d</span><br><span class="line">[root@k8s-master01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></li></ol><p><code>dashboard-admin 1 27s</code>创建成功</p><ol start="2"><li>把serviceaccount绑定在clusteradmin,授权serviceaccount用户具有整个集群的访问管理权限<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin</span><br></pre></td></tr></table></figure></li></ol><figure 
class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin</span></span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get secret -n kube-system</span></span><br><span class="line">NAME TYPE DATA AGE</span><br><span class="line">attachdetach-controller-token-j5vtc kubernetes.io/service-account-token 3 10d</span><br><span class="line">bootstrap-signer-token-prjr2 kubernetes.io/service-account-token 3 10d</span><br><span class="line">certificate-controller-token-f8rjx kubernetes.io/service-account-token 3 10d</span><br><span class="line">clusterrole-aggregation-controller-token-l6lqh kubernetes.io/service-account-token 3 10d</span><br><span class="line">coredns-token-p5z2z kubernetes.io/service-account-token 3 10d</span><br><span class="line">cronjob-controller-token-jsp8k kubernetes.io/service-account-token 3 10d</span><br><span class="line">daemon-set-controller-token-4fh89 kubernetes.io/service-account-token 3 10d</span><br><span class="line">dashboard-admin-token-dl8pf kubernetes.io/service-account-token 3 8m55s</span><br><span class="line">default-token-22jpc kubernetes.io/service-account-token 3 10d</span><br><span class="line">deployment-controller-token-jc4xc kubernetes.io/service-account-token 3 10d</span><br><span class="line">disruption-controller-token-p85cv kubernetes.io/service-account-token 3 10d</span><br><span class="line">endpoint-controller-token-dhk4f kubernetes.io/service-account-token 3 10d</span><br><span class="line">expand-controller-token-lbsrj kubernetes.io/service-account-token 3 10d</span><br><span class="line">flannel-token-qjgks kubernetes.io/service-account-token 3 10d</span><br><span class="line">generic-garbage-collector-token-6fwmg kubernetes.io/service-account-token 3 10d</span><br><span class="line">horizontal-pod-autoscaler-token-vl8dh kubernetes.io/service-account-token 3 10d</span><br><span class="line">job-controller-token-c2sfm kubernetes.io/service-account-token 3 
10d</span><br><span class="line">kube-proxy-token-qg465 kubernetes.io/service-account-token 3 10d</span><br><span class="line">kubernetes-dashboard-certs NodePort 0 56m</span><br><span class="line">kubernetes-dashboard-key-holder Opaque 2 56m</span><br><span class="line">kubernetes-dashboard-token-hpg2q kubernetes.io/service-account-token 3 56m</span><br><span class="line">namespace-controller-token-vvbxk kubernetes.io/service-account-token 3 10d</span><br><span class="line">node-controller-token-5hmv6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">persistent-volume-binder-token-6vrk6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">pod-garbage-collector-token-f8bvl kubernetes.io/service-account-token 3 10d</span><br><span class="line">pv-protection-controller-token-pp8bh kubernetes.io/service-account-token 3 10d</span><br><span class="line">pvc-protection-controller-token-jf6lj kubernetes.io/service-account-token 3 10d</span><br><span class="line">replicaset-controller-token-twbw8 kubernetes.io/service-account-token 3 10d</span><br><span class="line">replication-controller-token-lr45r kubernetes.io/service-account-token 3 10d</span><br><span class="line">resourcequota-controller-token-qlgbb kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-account-controller-token-bsqlq kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-controller-token-g6lvs kubernetes.io/service-account-token 3 10d</span><br><span class="line">statefulset-controller-token-h6wrx kubernetes.io/service-account-token 3 10d</span><br><span class="line">token-cleaner-token-wvwbn kubernetes.io/service-account-token 3 10d</span><br><span class="line">ttl-controller-token-z2fm7 kubernetes.io/service-account-token 3 10d</span><br></pre></td></tr></table></figure><ol start="3"><li>获取serviceaccount的secret信息,可得到token(令牌)的信息</li></ol><p><code>kubectl get secret -n kube-system</code></p><p>dashboard-admin-token-slfcr 通过上边命令获取到的<br><code>kubectl describe secret dashboard-admin-token-slfcr -n kube-system</code><br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span 
class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br></pre></td><td class="code"><pre><span class="line">```bash</span><br><span class="line">[root@k8s-master01 ~]# kubectl get secret -n kube-system</span><br><span class="line">NAME TYPE DATA AGE</span><br><span class="line">attachdetach-controller-token-j5vtc kubernetes.io/service-account-token 3 10d</span><br><span class="line">bootstrap-signer-token-prjr2 kubernetes.io/service-account-token 3 10d</span><br><span class="line">certificate-controller-token-f8rjx kubernetes.io/service-account-token 3 10d</span><br><span class="line">clusterrole-aggregation-controller-token-l6lqh kubernetes.io/service-account-token 3 10d</span><br><span class="line">coredns-token-p5z2z kubernetes.io/service-account-token 3 10d</span><br><span class="line">cronjob-controller-token-jsp8k kubernetes.io/service-account-token 3 10d</span><br><span class="line">daemon-set-controller-token-4fh89 kubernetes.io/service-account-token 3 10d</span><br><span class="line">dashboard-admin-token-dl8pf kubernetes.io/service-account-token 3 9m2s</span><br><span class="line">default-token-22jpc kubernetes.io/service-account-token 3 10d</span><br><span class="line">deployment-controller-token-jc4xc kubernetes.io/service-account-token 3 10d</span><br><span class="line">disruption-controller-token-p85cv kubernetes.io/service-account-token 3 10d</span><br><span class="line">endpoint-controller-token-dhk4f kubernetes.io/service-account-token 3 10d</span><br><span class="line">expand-controller-token-lbsrj kubernetes.io/service-account-token 3 10d</span><br><span class="line">flannel-token-qjgks kubernetes.io/service-account-token 3 10d</span><br><span class="line">generic-garbage-collector-token-6fwmg kubernetes.io/service-account-token 3 10d</span><br><span class="line">horizontal-pod-autoscaler-token-vl8dh kubernetes.io/service-account-token 3 10d</span><br><span class="line">job-controller-token-c2sfm kubernetes.io/service-account-token 3 10d</span><br><span class="line">kube-proxy-token-qg465 kubernetes.io/service-account-token 3 10d</span><br><span class="line">kubernetes-dashboard-certs NodePort 0 56m</span><br><span class="line">kubernetes-dashboard-key-holder Opaque 2 56m</span><br><span class="line">kubernetes-dashboard-token-hpg2q kubernetes.io/service-account-token 3 56m</span><br><span class="line">namespace-controller-token-vvbxk kubernetes.io/service-account-token 3 10d</span><br><span class="line">node-controller-token-5hmv6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">persistent-volume-binder-token-6vrk6 kubernetes.io/service-account-token 3 10d</span><br><span class="line">pod-garbage-collector-token-f8bvl kubernetes.io/service-account-token 3 10d</span><br><span class="line">pv-protection-controller-token-pp8bh kubernetes.io/service-account-token 3 10d</span><br><span class="line">pvc-protection-controller-token-jf6lj kubernetes.io/service-account-token 3 10d</span><br><span class="line">replicaset-controller-token-twbw8 kubernetes.io/service-account-token 3 10d</span><br><span class="line">replication-controller-token-lr45r kubernetes.io/service-account-token 3 10d</span><br><span class="line">resourcequota-controller-token-qlgbb kubernetes.io/service-account-token 3 
10d</span><br><span class="line">service-account-controller-token-bsqlq kubernetes.io/service-account-token 3 10d</span><br><span class="line">service-controller-token-g6lvs kubernetes.io/service-account-token 3 10d</span><br><span class="line">statefulset-controller-token-h6wrx kubernetes.io/service-account-token 3 10d</span><br><span class="line">token-cleaner-token-wvwbn kubernetes.io/service-account-token 3 10d</span><br><span class="line">ttl-controller-token-z2fm7 kubernetes.io/service-account-token 3 10d</span><br><span class="line">[root@k8s-master01 ~]# kubectl describe secret dashboard-admin-token-dl8pf -n kube-system</span><br><span class="line">Name: dashboard-admin-token-dl8pf</span><br><span class="line">Namespace: kube-system</span><br><span class="line">Labels: <none></span><br><span class="line">Annotations: kubernetes.io/service-account.name: dashboard-admin</span><br><span class="line"> kubernetes.io/service-account.uid: b4fc67f6-1cab-4486-8652-05346c939c6d</span><br><span class="line"></span><br><span class="line">Type: kubernetes.io/service-account-token</span><br><span class="line"></span><br><span class="line">Data</span><br><span class="line">====</span><br><span class="line">ca.crt: 1025 bytes</span><br><span class="line">namespace: 11 bytes</span><br><span class="line">token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZGw4cGYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjRmYzY3ZjYtMWNhYi00NDg2LTg2NTItMDUzNDZjOTM5YzZkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.ArAKoKEiZ0xaV9rqff63iq2t6iAsWBmA-VhHKK_pnkiMObpPL-JjZras40HO0crE7Gnou9dUWCStW3AbfmtJ1SX_Hmo4OlXGH2xFBJ-_2wruwWOU89dlHhOnhw8__skhsVrE92-KDK00GRSrA4BkUu8PWp45jCQyIwFbF8h3L2ydcNlcs_rxGieVFRc1p9gaf_HAyXIIHEgu-M5LxA6BduN-3Z7WBzYMokFd_r_c_beAQ4CUlTYc1c0FjmqLeyZpyLJL6IMqztjaYHFXiRty6c-PQHZd6HQoElJShbw1lhZtHXSSw0A70Kb3ZVfqQZxRaOsqJYo70sZXQQRaYso6fg</span><br><span class="line">[root@k8s-master01 ~]#</span><br></pre></td></tr></table></figure></p><p>输入Token<br><img src="https://img-blog.csdnimg.cn/20200413192952809.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>部署成功!</p>]]></content>
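<p>补充一个小技巧(示例,假设 ServiceAccount 仍按上文命名为 dashboard-admin):可以用下面两条命令直接取出 token,省去先 get secret 再手动复制名称去 describe 的步骤。</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># 找到 dashboard-admin 对应的 secret 名称(名称以 dashboard-admin-token- 开头,与上文输出一致)</span><br><span class="line">SECRET=$(kubectl -n kube-system get secret | awk '/^dashboard-admin-token-/{print $1}')</span><br><span class="line"># 只打印 token 字段,便于直接复制到登录页</span><br><span class="line">kubectl -n kube-system describe secret "$SECRET" | awk '/^token:/{print $2}'</span><br></pre></td></tr></table></figure>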
<summary type="html">
<p>上节讲解了通过 kubeadm 搭建 Kubernetes 1.15.1 集群环境,现在集群已经搭建成功,今天给大家展示 Kubernetes Dashboard 插件的安装</p>
<h2 id="下载官方的yaml文件"><a href="#下载官方的yaml文件" class="
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>解决Kubernetes1.15.1 coredns报错CrashLoopBackOff</title>
<link href="https://plutoacharon.github.io/2020/04/21/%E8%A7%A3%E5%86%B3Kubernetes1-5-1-coredns%E6%8A%A5%E9%94%99CrashLoopBackOff/"/>
<id>https://plutoacharon.github.io/2020/04/21/解决Kubernetes1-5-1-coredns报错CrashLoopBackOff/</id>
<published>2020-04-21T09:49:27.000Z</published>
<updated>2020-04-21T09:49:45.084Z</updated>
<content type="html"><![CDATA[<p>今天在使用K8s查看pod时发现,<code>coredns</code>出现了<code>CrashLoopBackOff</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-f9rb7 0/1 CrashLoopBackOff 50 9d</span><br><span class="line">coredns-5c98db65d4-xcd9s 0/1 CrashLoopBackOff 50 9d</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 9d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 9d</span><br></pre></td></tr></table></figure></p><p>使用<code>kubectl logs</code>命令查看, 报错很奇怪<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl logs coredns-5c98db65d4-xcd9s -n kube-system</span></span><br><span class="line">E0413 06:32:09.919666 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?<span class="built_in">limit</span>=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host</span><br><span class="line">E0413 06:32:09.919666 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?<span class="built_in">limit</span>=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host</span><br></pre></td></tr></table></figure></p><h2 id="原因"><a href="#原因" class="headerlink" title="原因:"></a>原因:</h2><p>查阅k8s官方文档<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">coredns pods 有 CrashLoopBackOff 或者 Error 状态</span><br><span 
class="line">如果有些节点运行的是旧版本的 Docker,同时启用了 SELinux,您或许会遇到 coredns pods 无法启动的情况。 要解决此问题,您可以尝试以下选项之一:</span><br><span class="line"></span><br><span class="line">升级到 Docker 的较新版本。</span><br><span class="line"></span><br><span class="line">禁用 SELinux.</span><br><span class="line"></span><br><span class="line">修改 coredns 部署以设置 allowPrivilegeEscalation 为 true:</span><br><span class="line"></span><br><span class="line">kubectl -n kube-system get deployment coredns -o yaml | \</span><br><span class="line">sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \</span><br><span class="line">kubectl apply -f -</span><br><span class="line">CoreDNS 处于 CrashLoopBackOff 时的另一个原因是当 Kubernetes 中部署的 CoreDNS Pod 检测 到环路时。有许多解决方法 可以避免在每次 CoreDNS 监测到循环并退出时,Kubernetes 尝试重启 CoreDNS Pod 的情况。</span><br><span class="line"></span><br><span class="line">警告:</span><br><span class="line">警告:禁用 SELinux 或设置 allowPrivilegeEscalation 为 true 可能会损害集群的安全性。</span><br></pre></td></tr></table></figure></p><p>我这里的原因可能是以前配置<code>iptables</code>时产生的</p><h2 id="解决"><a href="#解决" class="headerlink" title="解决"></a>解决</h2><ol><li>设置iptables为空规则<br><code>iptables -F && service iptables save</code></li><li>删除报错的coredns pod<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl delete pod coredns-5c98db65d4-xcd9s</span></span><br><span class="line">Error from server (NotFound): pods <span class="string">"coredns-5c98db65d4-xcd9s"</span> not found</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl delete pod coredns-5c98db65d4-xcd9s -n kube-system</span></span><br><span class="line">pod <span class="string">"coredns-5c98db65d4-xcd9s"</span> deleted</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl delete pod coredns-5c98db65d4-f9rb7 -n kube-system</span></span><br><span class="line">pod <span class="string">"coredns-5c98db65d4-f9rb7"</span> deleted</span><br></pre></td></tr></table></figure></li></ol><p>重新查看pod<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 flannel]<span class="comment"># kubectl get pod -n kube-system</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">coredns-5c98db65d4-54j5c 1/1 Running 0 13m</span><br><span class="line">coredns-5c98db65d4-jmvbf 1/1 Running 0 14m</span><br><span class="line">etcd-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-apiserver-k8s-master01 1/1 Running 2 9d</span><br><span class="line">kube-controller-manager-k8s-master01 1/1 Running 3 9d</span><br><span class="line">kube-flannel-ds-amd64-6h79p 1/1 Running 2 9d</span><br><span class="line">kube-flannel-ds-amd64-bnvtd 1/1 Running 3 
9d</span><br><span class="line">kube-flannel-ds-amd64-bsq4j 1/1 Running 2 9d</span><br><span class="line">kube-proxy-5fn9m 1/1 Running 1 9d</span><br><span class="line">kube-proxy-6hjvp 1/1 Running 2 9d</span><br><span class="line">kube-proxy-t47n9 1/1 Running 2 9d</span><br><span class="line">kube-scheduler-k8s-master01 1/1 Running 4 9d</span><br><span class="line">[root@k8s-master01 flannel]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>状态重新变成<code>Running</code></p>]]></content>
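<p>补充一个等价做法(示例):coredns 由 Deployment 管理,删除 Pod 后会自动重建,因此也可以按标签一次性删除再观察重建情况(kubeadm 部署的 coredns 默认带 k8s-app=kube-dns 标签,此处以此为假设)。</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># 按标签删除全部 coredns Pod,由 Deployment 自动拉起新副本</span><br><span class="line">kubectl -n kube-system delete pod -l k8s-app=kube-dns</span><br><span class="line"># 持续观察新 Pod,直到 READY 变为 1/1、STATUS 为 Running</span><br><span class="line">kubectl -n kube-system get pod -l k8s-app=kube-dns -w</span><br></pre></td></tr></table></figure>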
<summary type="html">
<p>今天在使用K8s查看pod时发现,<code>coredns</code>出现了<code>CrashLoopBackOff</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pr
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>Kubernetes(K8s)入门到实践(四)----Kubernetes1.15.1配置私有仓库Harbor</title>
<link href="https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E8%B7%B5-%E5%9B%9B-Kubernetes1-15-1%E9%85%8D%E7%BD%AE%E7%A7%81%E6%9C%89%E4%BB%93%E5%BA%93Harbor/"/>
<id>https://plutoacharon.github.io/2020/04/21/Kubernetes-K8s-入门到实践-四-Kubernetes1-15-1配置私有仓库Harbor/</id>
<published>2020-04-21T09:48:19.000Z</published>
<updated>2020-04-21T09:48:58.957Z</updated>
<content type="html"><![CDATA[<h1 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h1><p><a href="https://blog.csdn.net/qq_43442524/article/details/104483555" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(一)—-Kubernetes入门</a><br><a href="https://blog.csdn.net/qq_43442524/article/details/104496523" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(二)—-Kubernetes的基本概念和术语</a><br><a href="https://blog.csdn.net/qq_43442524/article/details/105293018" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(三)—-Kubernetes Centos7集群安装</a><br><a href="https://blog.csdn.net/qq_43442524/article/details/105429614" target="_blank" rel="noopener">Kubernetes(K8s)入门到实践(四)—-Kubernetes1.15.1配置私有仓库Harbor</a></p><h2 id="前期准备"><a href="#前期准备" class="headerlink" title="前期准备"></a>前期准备</h2><ul><li>需要三台K8s节点</li><li>Harbor虚拟机</li><li>docker-compose</li><li>harbor安装包</li></ul><h2 id="安装docker"><a href="#安装docker" class="headerlink" title="安装docker"></a>安装docker</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">yum install -y yum-utils device-mapper-persistent-data lvm2</span><br><span class="line"></span><br><span class="line">yum-config-manager \</span><br><span class="line">--add-repo \ </span><br><span class="line">http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo</span><br><span class="line"></span><br><span class="line">yum update -y && yum install -y docker-ce</span><br></pre></td></tr></table></figure><p>安装完成后需要建立<code>/etc/docker/daemon.json</code>文件<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 启动docker</span></span><br><span class="line">systemctl start docker && systemctl <span class="built_in">enable</span> docker </span><br><span class="line"><span class="comment">## 创建 /etc/docker 目录</span></span><br><span class="line">mkdir /etc/docker</span><br><span class="line"><span class="comment"># 配置 daemon.json</span></span><br><span class="line">vim /etc/docker/daemon.json</span><br><span class="line">{</span><br><span class="line"> <span class="string">"exec-opts"</span>: [<span class="string">"native.cgroupdriver=systemd"</span>],</span><br><span class="line"> <span class="string">"log-driver"</span>: <span class="string">"json-file"</span>,</span><br><span class="line"> <span class="string">"log-opts"</span>: {</span><br><span class="line"><span class="string">"max-size"</span>: <span class="string">"100m"</span></span><br><span class="line"> },</span><br><span class="line"> <span class="string">"insecure-registries"</span>: [<span class="string">"https://hub.test.com"</span>]</span><br><span class="line">}</span><br><span 
class="line"></span><br><span class="line">mkdir -p /etc/systemd/system/docker.service.d</span><br><span class="line"><span class="comment"># 重启docker服务</span></span><br><span class="line">systemctl daemon-reload && systemctl restart docker && systemctl <span class="built_in">enable</span> docker</span><br></pre></td></tr></table></figure></p><p>同理: K8s节点也需要一样修改<code>/etc/docker/daemon.json</code>文件</p><h2 id="安装Harbor"><a href="#安装Harbor" class="headerlink" title="安装Harbor"></a>安装Harbor</h2><h3 id="下载docker-compose"><a href="#下载docker-compose" class="headerlink" title="下载docker-compose"></a>下载docker-compose</h3><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m`> ./docker-compose</span><br></pre></td></tr></table></figure><h3 id="下载解压Harbor"><a href="#下载解压Harbor" class="headerlink" title="下载解压Harbor"></a>下载解压Harbor</h3><p>Harbor 官方地址:<code>https://github.com/vmware/harbor/releases</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># chmod a+x docker-compose </span></span><br><span class="line">[root@localhost ~]<span class="comment"># mv docker-compose /usr/local/bin/</span></span><br><span class="line">[root@localhost ~]<span class="comment"># tar -zxvf harbor-offline-installer-v1.2.0.tgz </span></span><br><span class="line">[root@localhost ~]<span class="comment"># mv harbor /usr/local/</span></span><br><span class="line">[root@localhost ~]<span class="comment"># cd /usr/local/harbor/</span></span><br></pre></td></tr></table></figure></p><h3 id="配置harbor-cfg"><a href="#配置harbor-cfg" class="headerlink" title="配置harbor.cfg"></a>配置harbor.cfg</h3><p>修改为https协议,并且定义网址<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">hostname = hub.test.com</span><br><span class="line">ui_url_protocol = https</span><br></pre></td></tr></table></figure></p><p>以下为ssl证书配置文件目录 接下来配置HTTPS证书<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">ssl_cert = /data/cert/server.crt</span><br><span class="line">ssl_cert_key = /data/cert/server.key</span><br><span class="line"></span><br><span class="line">#The path of secretkey storage</span><br><span class="line">secretkey_path = /data</span><br></pre></td></tr></table></figure></p><h3 id="创建https证书以及配置相关目录权限"><a href="#创建https证书以及配置相关目录权限" class="headerlink" title="创建https证书以及配置相关目录权限"></a>创建https证书以及配置相关目录权限</h3><p>创建cert目录,输入密码例如<code>123456</code>下面配置会用到<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span 
class="line">[root@localhost harbor]<span class="comment"># mkdir -p /data/cert</span></span><br><span class="line">[root@localhost harbor]<span class="comment"># cd /data/cert/</span></span><br><span class="line">[root@localhost cert]<span class="comment"># openssl genrsa -des3 -out server.key 2048</span></span><br><span class="line">Generating RSA private key, 2048 bit long modulus</span><br><span class="line">...................................+++</span><br><span class="line">................+++</span><br><span class="line">e is 65537 (0x10001)</span><br><span class="line">Enter pass phrase <span class="keyword">for</span> server.key:</span><br><span class="line">Verifying - Enter pass phrase <span class="keyword">for</span> server.key:</span><br></pre></td></tr></table></figure></p><p>生成服务器CSR证书请求文件,注意站点名称要一致</p><p>输入刚才设置的密码进行配置</p><blockquote><p>Common Name (eg, your name or your server’s hostname) []:<code>hub.test.com</code> 一定要填上面配置的网址<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost cert]<span class="comment"># openssl req -new -key server.key -out server.csr</span></span><br><span class="line">Enter pass phrase <span class="keyword">for</span> server.key:</span><br><span class="line">You are about to be asked to enter information that will be incorporated</span><br><span class="line">into your certificate request.</span><br><span class="line">What you are about to enter is what is called a Distinguished Name or a DN.</span><br><span class="line">There are quite a few fields but you can leave some blank</span><br><span class="line">For some fields there will be a default value,</span><br><span class="line">If you enter <span class="string">'.'</span>, the field will be left blank.</span><br><span class="line">-----</span><br><span class="line">Country Name (2 letter code) [XX]:CN </span><br><span class="line">State or Province Name (full name) []:Hebei</span><br><span class="line">Locality Name (eg, city) [Default City]:sjz</span><br><span class="line">Organization Name (eg, company) [Default Company Ltd]:<span class="built_in">test</span></span><br><span class="line">Organizational Unit Name (eg, section) []:<span class="built_in">test</span></span><br><span class="line">Common Name (eg, your name or your server<span class="string">'s hostname) []:hub.test.com</span></span><br><span class="line"><span class="string">Email Address []:test@qq.com </span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string">Please enter the following '</span>extra<span class="string">' attributes</span></span><br><span class="line"><span class="string">to be sent with your certificate request</span></span><br><span class="line"><span class="string">A challenge password []:</span></span><br><span 
class="line"><span class="string">An optional company name []:</span></span><br></pre></td></tr></table></figure></p></blockquote><p>生成服务器认证证书<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost cert]<span class="comment"># cp server.key server.key.org</span></span><br><span class="line">[root@localhost cert]<span class="comment"># openssl rsa -in server.key.org -out server.key</span></span><br><span class="line">Enter pass phrase <span class="keyword">for</span> server.key.org:</span><br><span class="line">writing RSA key</span><br><span class="line">[root@localhost cert]<span class="comment"># openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt</span></span><br><span class="line">Signature ok</span><br><span class="line">subject=/C=CN/ST=Hebei/L=sjz/O=<span class="built_in">test</span>/OU=<span class="built_in">test</span>/CN=hub.test.com/emailAddress=<span class="built_in">test</span>@qq.com</span><br><span class="line">Getting Private key</span><br><span class="line">[root@localhost cert]<span class="comment"># ls</span></span><br><span class="line">server.crt server.csr server.key server.key.org</span><br><span class="line">[root@localhost cert]<span class="comment"># chmod a+x *</span></span><br><span class="line">[root@localhost cert]<span class="comment"># cd -</span></span><br><span class="line">/usr/<span class="built_in">local</span>/harbor</span><br></pre></td></tr></table></figure></p><p>安装<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost harbor]<span class="comment"># ./install.sh </span></span><br><span class="line">[root@localhost harbor]<span class="comment"># docker ps -a</span></span><br><span class="line">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br><span class="line">c998c35434cd vmware/nginx-photon:1.11.13 <span class="string">"nginx -g 'daemon of…"</span> 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp nginx</span><br><span class="line">b8651abbdc0f vmware/harbor-jobservice:v1.2.0 <span class="string">"/harbor/harbor_jobs…"</span> 2 hours ago Up 2 hours harbor-jobservice</span><br><span class="line">38cd42c3ad61 vmware/harbor-ui:v1.2.0 <span class="string">"/harbor/harbor_ui"</span> 2 hours ago Up 2 hours harbor-ui</span><br><span class="line">7117305239e4 vmware/harbor-adminserver:v1.2.0 <span class="string">"/harbor/harbor_admi…"</span> 2 hours ago Up 2 hours harbor-adminserver</span><br><span class="line">547244f64e7b vmware/harbor-db:v1.2.0 <span class="string">"docker-entrypoint.s…"</span> 2 hours ago Up 2 hours 3306/tcp harbor-db</span><br><span class="line">08ac3fe587c8 
vmware/registry:2.6.2-photon <span class="string">"/entrypoint.sh serv…"</span> 2 hours ago Up 2 hours 5000/tcp registry</span><br><span class="line">a137bc1e2548 vmware/harbor-log:v1.2.0 <span class="string">"/bin/sh -c 'crond &…"</span> 2 hours ago Up 2 hours 127.0.0.1:1514->514/tcp harbor-log</span><br><span class="line">[root@localhost harbor]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="修改hosts文件映射"><a href="#修改hosts文件映射" class="headerlink" title="修改hosts文件映射"></a>修改hosts文件映射</h3><p>修改k8s节点与Harbor虚拟机<code>/etc/hosts</code>文件<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">192.168.0.50 k8s-master01</span><br><span class="line">192.168.0.51 k8s-node01</span><br><span class="line">192.168.0.52 k8s-node02</span><br><span class="line">192.168.0.44 hub.test.com</span><br></pre></td></tr></table></figure></p><p>本地hosts文件添加<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">192.168.0.44 hub.test.com</span><br></pre></td></tr></table></figure></p><p>登录账号<code>admin</code>,密码<code>Harbor12345</code><br><img src="https://img-blog.csdnimg.cn/20200410115107334.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200410134144950.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="Harbor上传镜像"><a href="#Harbor上传镜像" class="headerlink" title="Harbor上传镜像"></a>Harbor上传镜像</h2><h3 id="拉取镜像"><a href="#拉取镜像" class="headerlink" title="拉取镜像"></a>拉取镜像</h3><p>这是是从我的docker hub中拉取的镜像<code>plutoacharon/myapp:v1</code>,也可以从docker hub中搜索拉取想要上传的镜像<br><code>docker pull plutoacharon/myapp:v1</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker pull plutoacharon/myapp:v1</span></span><br><span class="line">v1: Pulling from plutoacharon/myapp</span><br><span class="line">550fe1bea624: Pull complete </span><br><span class="line">af3988949040: Pull complete </span><br><span class="line">d6642feac728: Pull complete </span><br><span class="line">c20f0a205eaa: Pull complete </span><br><span class="line">fe78b5db7c4e: Pull complete </span><br><span class="line">6565e38e67fe: Pull complete </span><br><span class="line">Digest: sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513</span><br><span class="line">Status: Downloaded newer image <span class="keyword">for</span> plutoacharon/myapp:v1</span><br><span class="line">docker.io/plutoacharon/myapp:v1</span><br><span class="line">[root@localhost ~]<span class="comment"># 
docker images</span></span><br><span class="line">REPOSITORY TAG IMAGE ID CREATED SIZE</span><br><span class="line">plutoacharon/myapp v1 d4a5e0eaa84f 2 years ago 15.5MB</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="上传镜像"><a href="#上传镜像" class="headerlink" title="上传镜像"></a>上传镜像</h3><p>首先使用<code>docker login https://hub.test.com</code>登录才可以上传到Harbor中<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker login https://hub.test.com</span></span><br><span class="line">Username: admin</span><br><span class="line">Password: </span><br><span class="line">WARNING! Your password will be stored unencrypted <span class="keyword">in</span> /root/.docker/config.json.</span><br><span class="line">Configure a credential helper to remove this warning. See</span><br><span class="line">https://docs.docker.com/engine/reference/commandline/login/<span class="comment">#credentials-store</span></span><br><span class="line"></span><br><span class="line">Login Succeeded</span><br><span class="line">[root@localhost ~]<span class="comment"># docker tag plutoacharon/myapp:v1 hub.test.com/library/myapp:v1</span></span><br><span class="line">[root@localhost ~]<span class="comment"># docker push hub.test.com/library/myapp:v1</span></span><br><span class="line">The push refers to repository [hub.test.com/library/myapp]</span><br><span class="line">a0d2c4392b06: Pushed </span><br><span class="line">05a9e65e2d53: Pushed </span><br><span class="line">68695a6cfd7d: Pushed </span><br><span class="line">c1dc81a64903: Pushed </span><br><span class="line">8460a579ab63: Pushed </span><br><span class="line">d39d92664027: Pushed </span><br><span class="line">v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569</span><br></pre></td></tr></table></figure></p><p><img src="https://img-blog.csdnimg.cn/20200410142631549.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="Kubernetes拉取运行Harbor镜像"><a href="#Kubernetes拉取运行Harbor镜像" class="headerlink" title="Kubernetes拉取运行Harbor镜像"></a>Kubernetes拉取运行Harbor镜像</h2><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-master01 ~]<span 
class="comment"># kubectl run nginx-deployment --image=hub.test.com/library/myapp:v1 --port=80 --replicas=1</span></span><br><span class="line">kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed <span class="keyword">in</span> a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.</span><br><span class="line">deployment.apps/nginx-deployment created</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get deployment</span></span><br><span class="line">NAME READY UP-TO-DATE AVAILABLE AGE</span><br><span class="line">nginx-deployment 1/1 1 1 25s</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get rs</span></span><br><span class="line">NAME DESIRED CURRENT READY AGE</span><br><span class="line">nginx-deployment-bdf84f685 1 1 1 39s</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get pod</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE</span><br><span class="line">nginx-deployment-bdf84f685-pg7qk 1/1 Running 0 50s</span><br><span class="line">[root@k8s-master01 ~]<span class="comment"># kubectl get pod -o wide</span></span><br><span class="line">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES</span><br><span class="line">nginx-deployment-bdf84f685-pg7qk 1/1 Running 0 65s 10.244.1.2 k8s-node01 <none> <none></span><br></pre></td></tr></table></figure><p><code>kubectl get pod -o wide</code>可以看到nginx-deployment在node1上运行<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># docker ps | grep nginx</span></span><br><span class="line">066e82c78200 hub.test.com/library/myapp <span class="string">"nginx -g 'daemon of…"</span> 20 minutes ago Up 20 minutes k8s_nginx-deployment_nginx-deployment-bdf84f685-pg7qk_default_11af7460-37a5-4d61-b94c-5c64684110ed_0</span><br><span class="line">3a0c5624068c k8s.gcr.io/pause:3.1 <span class="string">"/pause"</span> 20 minutes ago Up 20 minutes k8s_POD_nginx-deployment-bdf84f685-pg7qk_default_11af7460-37a5-4d61-b94c-5c64684110ed_0</span><br><span class="line">[root@k8s-node01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@k8s-node01 ~]<span class="comment"># curl 10.244.1.2</span></span><br><span class="line">Hello MyApp | Version: v1 | <a href=<span class="string">"hostname.html"</span>>Pod Name</a></span><br><span class="line">[root@k8s-node01 ~]<span class="comment"># curl 10.244.1.2/hostname.html</span></span><br><span class="line">nginx-deployment-bdf84f685-pg7qk</span><br><span class="line">[root@k8s-node01 ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<h1 id="目录"><a href="#目录" class="headerlink" title="目录"></a>目录</h1><p><a href="https://blog.csdn.net/qq_43442524/article/details/104483555"
</summary>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/categories/Kubernetes/"/>
<category term="Kubernetes" scheme="https://plutoacharon.github.io/tags/Kubernetes/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(五)---- 配置nginx反向代理和负载均衡</title>
<link href="https://plutoacharon.github.io/2020/04/09/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%BA%94%EF%BC%89-%E9%85%8D%E7%BD%AEnginx%E5%8F%8D%E5%90%91%E4%BB%A3%E7%90%86%E5%92%8C%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<id>https://plutoacharon.github.io/2020/04/09/HA高可用与负载均衡入门到实战(五)-配置nginx反向代理和负载均衡/</id>
<published>2020-04-09T12:32:14.000Z</published>
<updated>2020-04-09T12:32:26.531Z</updated>
<content type="html"><![CDATA[<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p><p>拓扑图:<br><img src="https://img-blog.csdnimg.cn/20200409155415760.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70#pic_center" alt="在这里插入图片描述"></p><h3 id="正向代理"><a href="#正向代理" class="headerlink" title="正向代理"></a>正向代理</h3><ul><li>代理:也被叫做正向代理,是一个位于客户端和目标服务器之间的代理服务器</li><li>作用:客户端将发送的请求和指定的目标服务器提交给代理服务器,然后代理服务器向目标服务器发起请求,并将获得的响应结果返回给客户端的过程<br><img src="https://img-blog.csdnimg.cn/20200409170657710.png" alt="在这里插入图片描述"></li></ul><h3 id="反向代理"><a href="#反向代理" class="headerlink" title="反向代理"></a>反向代理</h3><ul><li>反向代理:对于客户端而言就是目标服务器</li><li>作用:客户端向反向代理服务器发送请求后,反向代理服务器将该请求转发给内部网络上的后端服务器,并将从后端服务器上得到的响应结果返回给客户端<br><img src="https://img-blog.csdnimg.cn/20200409170738230.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><h4 id="反向代理服务配置"><a href="#反向代理服务配置" class="headerlink" title="反向代理服务配置"></a>反向代理服务配置</h4></li><li>反向代理的配置指令:proxy_pass,用于设置后端服务器的地址。该地址中包括传输数据使用的协议、服务器主机名以及可选的URI资源等</li><li><p>作用范围:通常在location块中进行设置</p><h3 id="负载均衡"><a href="#负载均衡" class="headerlink" title="负载均衡"></a>负载均衡</h3></li><li><p>指令:upstream指令可以实现负载均衡,在该指令中能够配置负载服务器组</p></li><li>配置方式:目前负载均衡有4种典型的配置方式</li></ul><table><thead><tr><th>配置方式</th><th>说明</th></tr></thead><tbody><tr><td>轮询方式</td><td>负载均衡默认设置方式,每个请求按照时间顺序逐一分配到不同的后端服务器进行处理,如果有服务器宕机,会自动剔除</td></tr><tr><td>权重方式</td><td>利用weight指定轮询的权重比率,与访问率成正比,用于后端服务器性能不均的情况</td></tr><tr><td>ip_hash方式</td><td>每个请求按访问IP的hash结果分配,这样可以使每个访客固定访问一个后端服务器,可以解决Session共享的问题</td></tr><tr><td>第三方模块</td><td>采用fair时,按照每台服务器的响应时间来分配请求,响应时间短的优先分配;若第三方模块采用url_hash时,按照访问url的hash值来分配请求</td></tr></tbody></table><h2 id="配置nginx反向代理,使用nginx1、APP1、APP2三个容器"><a href="#配置nginx反向代理,使用nginx1、APP1、APP2三个容器" class="headerlink" title="配置nginx反向代理,使用nginx1、APP1、APP2三个容器"></a>配置nginx反向代理,使用nginx1、APP1、APP2三个容器</h2><h3 id="使用php-apache镜像启动APP1和APP2两个容器"><a href="#使用php-apache镜像启动APP1和APP2两个容器" class="headerlink" title="使用php-apache镜像启动APP1和APP2两个容器"></a>使用php-apache镜像启动APP1和APP2两个容器</h3><p>1) docker network create –subnet=172.18.0.0/16 cluster //创建docker网络<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker network create --subnet=172.18.0.0/16 cluster</span></span><br><span class="line">93cf616f5b6466f3872a697e7246d525173405659d659f775584460cc523fc19</span><br><span class="line">[root@localhost ~]<span class="comment"># docker network ls</span></span><br><span class="line">NETWORK ID NAME DRIVER SCOPE</span><br><span class="line">5b668484dc8f bridge bridge <span class="built_in">local</span></span><br><span class="line">93cf616f5b64 cluster bridge <span class="built_in">local</span></span><br><span class="line">f2010c589fe5 host host <span class="built_in">local</span></span><br><span class="line">3e84fc461677 none null <span class="built_in">local</span></span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>2) 
启动容器APP1,设定地址为172.18.0.111, 启动容器APP2,设定地址为172.18.0.112</p><p><code>docker run -d --privileged --net cluster --ip 172.18.0.111 --name APP1 php-apache /usr/sbin/init</code><br><code>docker run -d --privileged --net cluster --ip 172.18.0.112 --name APP2 php-apache /usr/sbin/init</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.111 --name APP1 php-apache /usr/sbin/init </span></span><br><span class="line">0119783e023dbd322e6598c4556743408fb2fda176b26406b8c80d3d982bf02e</span><br><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.112 --name APP2 php-apache /usr/sbin/init </span></span><br><span class="line">f2744c76c1759187788620e84705a0905b1021da4d987620b96cc0f3b4d2eac8</span><br><span class="line">[root@localhost ~]<span class="comment"># docker ps</span></span><br><span class="line">CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</span><br><span class="line">f2744c76c175 php-apache <span class="string">"/usr/sbin/init"</span> 4 seconds ago Up 2 seconds APP2</span><br><span class="line">0119783e023d php-apache <span class="string">"/usr/sbin/init"</span> 20 seconds ago Up 18 seconds APP1</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>3) 配置容器APP1,编辑首页内容为“site1”<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker exec -it f27 /bin/bash</span></span><br><span class="line">[root@f2744c76c175 /]<span class="comment"># vim /var/www/html/index.html</span></span><br><span class="line">[root@f2744c76c175 /]<span class="comment"># systemctl status httpd</span></span><br><span class="line">● httpd.service - The Apache HTTP Server</span><br><span class="line"> Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)</span><br><span class="line"> Drop-In: /usr/lib/systemd/system/httpd.service.d</span><br><span class="line"> └─php-fpm.conf</span><br><span class="line"> Active: inactive (dead)</span><br><span class="line"> Docs: man:httpd.service(8)</span><br><span class="line">[root@f2744c76c175 /]<span class="comment"># systemctl start httpd</span></span><br></pre></td></tr></table></figure></p><p>4) 配置容器APP1,编辑首页内容为“site2”<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker exec -it 011 /bin/bash</span></span><br><span class="line">[root@0119783e023d /]<span class="comment"># vim 
/var/www/html/index.html</span></span><br><span class="line">[root@0119783e023d /]<span class="comment"># systemctl start httpd</span></span><br><span class="line">[root@0119783e023d /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>5)在宿主机访问<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.111</span></span><br><span class="line">This is site1!</span><br><span class="line">[root@localhost ~]<span class="comment"># curl 172.18.0.112</span></span><br><span class="line">This is site2!</span><br><span class="line">[root@localhost ~]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><h3 id="使用nginx镜像启动nginx1容器,配置反向代理"><a href="#使用nginx镜像启动nginx1容器,配置反向代理" class="headerlink" title="使用nginx镜像启动nginx1容器,配置反向代理"></a>使用nginx镜像启动nginx1容器,配置反向代理</h3><p>1) 启动容器nginx1,设定地址为172.18.0.11<br><code>docker run -d --privileged --net cluster --ip 172.18.0.11 -p 80:80 --name nginx1 nginx /usr/sbin/init</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@localhost ~]<span class="comment"># docker run -d --privileged --net cluster --ip 172.18.0.11 -p 80:80 --name nginx1 nginx /usr/sbin/init</span></span><br><span class="line">b0db3efdfe817b3df2557ef598e6bf709a5cabcfe2122d40caf344ee96075aac</span><br><span class="line">[root@localhost ~]<span class="comment"># docker exec -it b0d /bin/bash</span></span><br><span class="line">[root@b0db3efdfe81 /]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>2) 在容器nginx1编辑/etc/nginx/nginx.conf文件,重新启动nginx服务</p><p>配置两台虚拟主机<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site1.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.111;</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site2.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://172.18.0.112;</span><br><span class="line"> }</span><br></pre></td></tr></table></figure></p><p>3) }在主机编辑hosts文件<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">宿主机的IP地址 site1.test.com</span><br><span class="line">宿主机的IP地址 site2.test.com</span><br><span 
class="line">宿主机的IP地址 www.test.com</span><br></pre></td></tr></table></figure></p><p>4) 在主机使用浏览器访问site1.test.com<br><img src="https://img-blog.csdnimg.cn/20200409164810311.png" alt="在这里插入图片描述"><br>5) 在主机使用浏览器访问site2.test.com<br><img src="https://img-blog.csdnimg.cn/20200409164752131.png" alt="在这里插入图片描述"></p><h4 id="配置nginx负载均衡,使用nginx1、APP1、APP2三个容器"><a href="#配置nginx负载均衡,使用nginx1、APP1、APP2三个容器" class="headerlink" title="配置nginx负载均衡,使用nginx1、APP1、APP2三个容器"></a>配置nginx负载均衡,使用nginx1、APP1、APP2三个容器</h4><p><strong>保持以上三个容器不变</strong> </p><p>使用nginx1容器,配置<code>nginx一般轮询负载均衡</code></p><p>1) 在容器nginx1编辑/etc/nginx/nginx.conf文件,重新启动nginx服务</p><p>配置 <a href="http://www.test.com虚拟主机" target="_blank" rel="noopener">www.test.com虚拟主机</a><br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name www.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://APP;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>配置负载均衡服务器组<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">upstream APP {</span><br><span class="line"> server 172.18.0.111;</span><br><span class="line"> server 172.18.0.112;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在主机使用浏览器访问 <a href="http://www.test.com并不断刷新" target="_blank" rel="noopener">www.test.com并不断刷新</a><br><img src="https://img-blog.csdnimg.cn/20200409165619268.png" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200409165626825.png" alt="在这里插入图片描述"></p><h4 id="使用nginx1容器,配置nginx-IP哈希轮询"><a href="#使用nginx1容器,配置nginx-IP哈希轮询" class="headerlink" title="使用nginx1容器,配置nginx IP哈希轮询"></a>使用nginx1容器,配置nginx IP哈希轮询</h4><p>1) 在容器nginx1编辑/etc/nginx/conf.d/default.conf文件,重新启动nginx服务</p><p>配置 <a href="http://www.test.com虚拟主机" target="_blank" rel="noopener">www.test.com虚拟主机</a><br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name www.test.com;</span><br><span class="line"> location / {</span><br><span class="line"> proxy_pass http://APP;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>配置负载均衡服务器组<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">upstream APP {</span><br><span class="line"> ip_hash;</span><br><span class="line"> server 172.18.0.111;</span><br><span class="line"> server 172.18.0.112;</span><br><span 
class="line">}</span><br></pre></td></tr></table></figure></p><p>2) 在不同ip主机使用浏览器访问 <a href="http://www.test.com" target="_blank" rel="noopener">www.test.com</a><br><img src="https://img-blog.csdnimg.cn/20200409170202667.png" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200409170146589.png" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="网站架构"><a href="#网站架构" class="headerlink" title="网站架构"></a>网站架构</h2><p>基于Docker容器里构建高并发网站</p>
<p>拓扑图:<br><img src="https://img-blog.c
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(四)---- 配置nginx防盗链和HTTPS</title>
<link href="https://plutoacharon.github.io/2020/04/09/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E5%9B%9B%EF%BC%89-%E9%85%8D%E7%BD%AEnginx%E9%98%B2%E7%9B%97%E9%93%BE%E5%92%8CHTTPS/"/>
<id>https://plutoacharon.github.io/2020/04/09/HA高可用与负载均衡入门到实战(四)-配置nginx防盗链和HTTPS/</id>
<published>2020-04-09T12:31:21.000Z</published>
<updated>2020-04-09T12:31:54.434Z</updated>
<content type="html"><![CDATA[<h2 id="环境要求"><a href="#环境要求" class="headerlink" title="环境要求"></a>环境要求</h2><p>vmware虚拟机双核2G内存以上<br>安装有CentOS7和docker</p><h2 id="配置nginx图片防盗链"><a href="#配置nginx图片防盗链" class="headerlink" title="配置nginx图片防盗链"></a>配置nginx图片防盗链</h2><h3 id="配置盗链网站"><a href="#配置盗链网站" class="headerlink" title="配置盗链网站"></a>配置盗链网站</h3><p>1) 启动nginx容器,设置端口映射,并进入容器<br><code>docker run -d --privileged -p 80:80 -p 443:443 nginx /usr/sbin/init</code></p><p>2) 在nginx容器中准备两个网站,配置文件<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site1.test.com;</span><br><span class="line"> root /var/www/html/site1;</span><br><span class="line"> index index.html;</span><br><span class="line">}</span><br><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site2.test.com;</span><br><span class="line"> root /var/www/html/site2;</span><br><span class="line"> index index.html;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>3) 在主机编辑hosts文件<br><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">虚拟机的IP地址 site1.test.com</span><br><span class="line">虚拟机的IP地址 site2.test.com</span><br></pre></td></tr></table></figure></p><p>4) 创建/var/www/html/site1/index.html,展示自己的图片<br><figure class="highlight html"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="tag"><<span class="name">h1</span>></span>welcome to site1<span class="tag"></<span class="name">h1</span>></span></span><br><span class="line"><span class="tag"><<span class="name">img</span> <span class="attr">src</span>=<span class="string">”1.jpg”</span>></span></span><br></pre></td></tr></table></figure></p><p>从网上随便下载一张图片作为<code>1.jpg</code><br><code>wget https://www.heuet.edu.cn/images/18/03/07/2tf9v0vlbb/20150415094513422.jpg</code><br>5) 创建/var/www/html/site2/index.html,盗用site1的图片<br><figure class="highlight html"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="tag"><<span class="name">h1</span>></span>welcome to site2<span class="tag"></<span class="name">h1</span>></span></span><br><span class="line"><span class="tag"><<span class="name">img</span> <span class="attr">src</span>=<span class="string">”http://site1.test.com/1.jpg”</span>></span></span><br></pre></td></tr></table></figure></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span 
class="line">11</span><br></pre></td><td class="code"><pre><span class="line">[root@5ef46ce6b610 /]<span class="comment"># mkdir -p /var/www/html/site1</span></span><br><span class="line">[root@5ef46ce6b610 /]<span class="comment"># mkdir -p /var/www/html/site2</span></span><br><span class="line">[root@5ef46ce6b610 /]<span class="comment"># vim /var/www/html/site1/index.html</span></span><br><span class="line">[root@5ef46ce6b610 /]<span class="comment"># vim /var/www/html/site2/index.html</span></span><br><span class="line">[root@5ef46ce6b610 /]<span class="comment"># cat /var/www/html/site1/index.html</span></span><br><span class="line"><h1>welcome to site1</h1></span><br><span class="line"><img src=<span class="string">"1.jpg"</span>></span><br><span class="line">[root@5ef46ce6b610 /]<span class="comment"># cat /var/www/html/site2/index.html</span></span><br><span class="line"><h1>welcome to site2</h1></span><br><span class="line"><img src=<span class="string">"http://site1.test.com/1.jpg"</span>></span><br><span class="line">[root@5ef46ce6b610 /]<span class="comment">#</span></span><br></pre></td></tr></table></figure><p>重启nginx服务<br><code>systemctl restart nginx</code><br>6) 在主机浏览器访问site1.test.com<br><img src="https://img-blog.csdnimg.cn/20200409142548690.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>7) 在主机浏览器访问site2.test.com<br><img src="https://img-blog.csdnimg.cn/20200409142557540.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置site1-test-com防盗链"><a href="#配置site1-test-com防盗链" class="headerlink" title="配置site1.test.com防盗链"></a>配置site1.test.com防盗链</h3><p>1) 在nginx容器中编辑/etc/nginx/nginx.conf文件,配置防盗链<br>配置两台虚拟主机<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line"> server {</span><br><span class="line"> listen 80; </span><br><span class="line"> server_name site1.test.com;</span><br><span class="line"> </span><br><span class="line"> location / { </span><br><span class="line"> root /var/www/html/site1;</span><br><span class="line"> index index.html index.htm;</span><br><span class="line"> } </span><br><span class="line"> </span><br><span class="line"> location ~ \.(jpg|png|gif)$ {</span><br><span class="line"> valid_referers site1.test.com;</span><br><span class="line"> if ($invalid_referer) {</span><br><span class="line"> return 403; </span><br><span class="line"> } </span><br><span class="line"> } </span><br><span class="line"> } </span><br><span class="line"></span><br><span class="line"> server { 
</span><br><span class="line"> listen 80; </span><br><span class="line"> server_name site2.test.com; </span><br><span class="line"> location / { </span><br><span class="line"> root /var/www/html/site2; </span><br><span class="line"> index index.html index.htm; </span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p><strong>说明</strong>:<br>判断referer的值,来判断当前图片的引用是否合法,一旦检测到来源不是本站,就立即阻止图片的发送,或换成一张禁止防盗链提示的图片<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">location ~ \.(jpg|png|gif)$ {</span><br><span class="line"> valid_referers site1.test.com;</span><br><span class="line"> if ($invalid_referer) {</span><br><span class="line"> return 403; </span><br><span class="line"> } </span><br><span class="line"> }</span><br></pre></td></tr></table></figure></p><ul><li>第1行配置,用于匹配文件扩展名为gif、jpg、png、swf、flv的资源</li><li>第2行中的<code>valid_referers</code>指令用于设置允许访问资源的网站列表(即白名单)。当请求消息头中的<code>referer</code>符合白名单时,内置变量<code>$invalid_referer</code>的值为空字符串,否则为1</li><li>第3~5行的配置,可以禁止白名单之外的网站访问资源,并返回403状态码</li></ul><p>2) 在主机使用浏览器访问site1.test.com<br><img src="https://img-blog.csdnimg.cn/20200409145300851.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>3) 在主机使用浏览器访问site2.test.com</p><p>如果测试仍然显示图片,是因为浏览器还有上次访问的缓存<br>建议更换浏览器,或者清理缓存<br><img src="https://img-blog.csdnimg.cn/20200409145242984.png" alt="在这里插入图片描述"></p><h2 id="配置nginx的HTTPS网站"><a href="#配置nginx的HTTPS网站" class="headerlink" title="配置nginx的HTTPS网站"></a>配置nginx的HTTPS网站</h2><h3 id="颁发网站认证证书"><a href="#颁发网站认证证书" class="headerlink" title="颁发网站认证证书"></a>颁发网站认证证书</h3><p>1) 在nginx容器中检查系统安装了openssl<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">[root@5ef46ce6b610 site1]<span class="comment"># rpm -qa | grep ssl</span></span><br><span class="line">openssl-libs-1.1.1c-2.el8.x86_64</span><br><span class="line">openssl-1.1.1c-2.el8.x86_64</span><br><span class="line">[root@5ef46ce6b610 site1]<span class="comment">#</span></span><br></pre></td></tr></table></figure></p><p>2) 建立/etc/nginx/ssl目录,并生成服务器RSA私钥<br><code>openssl genrsa -out server.key 2048</code></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span 
class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br></pre></td><td class="code"><pre><span class="line">[root@5ef46ce6b610 site1]<span class="comment"># mkdir /etc/nginx/ssl</span></span><br><span class="line">[root@5ef46ce6b610 site1]<span class="comment"># cd /etc/nginx/ssl/</span></span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># </span></span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># ls</span></span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># openssl genrsa -out server.key 2048</span></span><br><span class="line">Generating RSA private key, 2048 bit long modulus (2 primes)</span><br><span class="line">..............+++++</span><br><span class="line">.....................................................................................................................................................................................+++++</span><br><span class="line">e is 65537 (0x010001)</span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># ls</span></span><br><span class="line">server.key</span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># cat server.key </span></span><br><span class="line">-----BEGIN RSA PRIVATE KEY-----</span><br><span class="line">MIIEowIBAAKCAQEAmfzZmG91slq8a5CNaNF28QFSyBxh4AO6ztTCzKpPXoxCNANm</span><br><span class="line">8BnmWLd3EIwBZCYWh6JUrfc8HxTATbWyHw0KqE6rEDlui15bg9crtu1xrD8XIbpg</span><br><span class="line">/uzfDDemg9evxdXqQ4PjcbojxRkkuKF8OSyaG38Z8K0Fd9ZXbbX4Hy0rGhDrPIby</span><br><span class="line">WcZBfOH4V6MpPEM703p7yfGOd/TAxz9j9TgTj8hzsTL1qGHz/g8x75Ok+Sukmln6</span><br><span class="line">c3m9Vq2W5EiN0oo/uFJGvs3QmiF7wbWAdFFOTnjJ7C6XYFmzizI6oNCShegQMgoK</span><br><span class="line">Z2G4fTnidnNKD9gKn8Qz3HWTf7Y6VMflqY3Y2QIDAQABAoIBAEo6wQnicPIRG1Me</span><br><span class="line">04v7rUJwSN9+DxBVu++IUH8oeioxophAK5cCZS/PAO5RDzqfwayQbBGQZML21dyg</span><br><span class="line">AcVGHCUWBxBDHy6/xY3AY6pCu9E0eIohtjAtLzhMe1CC4JCVleAF69YezK9ud20p</span><br><span class="line">KyDEh2VJ189VGJW0FWElnv4oX+aoEcirH+nTgxtYLh1AOE5Ts9MExSokl+g4u8CY</span><br><span class="line">3k/qNXz+0RfqUAoUgue8BPAo3xqY1EXiL0kAkch73ipAW++Nd8y53M7Pcjxh+xSl</span><br><span class="line">BSBBr1xE4dEQyZiYwMUxwdYVwChTr28T2hFIbc2SO1J5h0KsG5+HxiPJgPjNAsMC</span><br><span class="line">gjL5PIECgYEAymcXhZOfOXHTl8CANGM981Pp/i1QtjDpTehCOFidlucL2WKSfx9O</span><br><span class="line">kwClI0oR5DnAt9nieQrjRIVNpAhsqc8DaFEDCqDj6snUEu4lTMHmUNhiEeYgl01X</span><br><span class="line">UYP+xUf68b4VSWqZL9Lnf50Mo9ztTEcRaJPITYBKUV0IoXpgfpSwVP0CgYEAwsOy</span><br><span class="line">pTKk8wx0qWlRU8P7/t2fdtBYe1eR/xR/i50Nkzmna4M9BP0UDTMZcKFgLcIC4yr9</span><br><span class="line">Q/3zBYw5UrAMVK7aFX1WSdrIohqPNnejcOpO28SfxxR9amDpaeOGo4E5ijyLeB9B</span><br><span class="line">EUCBv9xaCE0/gsY4atmWE4PpGMHoj3QABuY6KA0CgYEAwLYdiDpBDRHatA8+QiMH</span><br><span class="line">til8jl0ZDw9M47ezbTC6gxZjisw2zcDCMGcZ1JrOpC1019glsLf0IaaGgRrgU2He</span><br><span class="line">TbFsou8DcuZN/OQwMYAgyXLtFTu2ZjjmXZ++sJnTTd59KBTN2+IENtYSVeahLdIw</span><br><span class="line">uhCTU29F02gwModxXrQ1nAUCgYBHSDyv/ZMlaV+hSWx8jfRC2XYtlB9uNSS4CRaN</span><br><span 
class="line">UJPRWH6P+N5yXvXhxtv+vvFmjeVkoy1Cn0U8uI+aVdiNfdlPmCnmqe5YdgQIWU02</span><br><span class="line">XGs0QAiCYltsfb+wA5gZa4hVsccR1c6Is+VJBSrmcu9Vu5qWcMBesB618vJc3oXM</span><br><span class="line">AKM0WQKBgHqQX7HKA3g2UarzwBLpPugQobmU8ku4cvUF2n1ZkL6mq4BuXwKf3UPT</span><br><span class="line">wjVkFLFG+OcI+NmPB1NzU0szXYExBVSIYHHizQ8sX1ILUYOzLzJVGtWL221kiPhk</span><br><span class="line">ldheBAdeoRY7yfo0OcaHxPFmgQJPqqgISTFSlTRy+r1AFsqPr1D0</span><br><span class="line">-----END RSA PRIVATE KEY-----</span><br></pre></td></tr></table></figure><p>3) 生成服务器CSR证书请求文件,注意<strong>站点名称要一致</strong></p><p><code>openssl req -new -key server.key -out server.csr</code><br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br></pre></td><td class="code"><pre><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># openssl req -new -key server.key -out server.csr</span></span><br><span class="line">You are about to be asked to enter information that will be incorporated</span><br><span class="line">into your certificate request.</span><br><span class="line">What you are about to enter is what is called a Distinguished Name or a DN.</span><br><span class="line">There are quite a few fields but you can leave some blank</span><br><span class="line">For some fields there will be a default value,</span><br><span class="line">If you enter <span class="string">'.'</span>, the field will be left blank.</span><br><span class="line">-----</span><br><span class="line">Country Name (2 letter code) [XX]:CN</span><br><span class="line">State or Province Name (full name) []:Hebei</span><br><span class="line">Locality Name (eg, city) [Default City]:Shijiazhuang </span><br><span class="line">Organization Name (eg, company) [Default Company Ltd]:It</span><br><span class="line">Organizational Unit Name (eg, section) []:www.test.com</span><br><span class="line">Common Name (eg, your name or your server<span class="string">'s hostname) []:test@qq.com</span></span><br><span class="line"><span class="string">Email Address []:test@qq.com</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string">Please enter the following '</span>extra<span class="string">' 
attributes</span></span><br><span class="line"><span class="string">to be sent with your certificate request</span></span><br><span class="line"><span class="string">A challenge password []:</span></span><br><span class="line"><span class="string">An optional company name []:</span></span><br><span class="line"><span class="string">[root@5ef46ce6b610 ssl]# ls</span></span><br><span class="line"><span class="string">server.csr server.key</span></span><br><span class="line"><span class="string">[root@5ef46ce6b610 ssl]# cat server.csr </span></span><br><span class="line"><span class="string">-----BEGIN CERTIFICATE REQUEST-----</span></span><br><span class="line"><span class="string">MIIC0DCCAbgCAQAwgYoxCzAJBgNVBAYTAkNOMQ4wDAYDVQQIDAVIZWJlaTEVMBMG</span></span><br><span class="line"><span class="string">A1UEBwwMU2hpamlhemh1YW5nMQswCQYDVQQKDAJJdDEVMBMGA1UECwwMd3d3LnRl</span></span><br><span class="line"><span class="string">c3QuY29tMRQwEgYDVQQDDAt0ZXN0QHFxLmNvbTEaMBgGCSqGSIb3DQEJARYLdGVz</span></span><br><span class="line"><span class="string">dEBxcS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCZ/NmYb3Wy</span></span><br><span class="line"><span class="string">WrxrkI1o0XbxAVLIHGHgA7rO1MLMqk9ejEI0A2bwGeZYt3cQjAFkJhaHolSt9zwf</span></span><br><span class="line"><span class="string">FMBNtbIfDQqoTqsQOW6LXluD1yu27XGsPxchumD+7N8MN6aD16/F1epDg+NxuiPF</span></span><br><span class="line"><span class="string">GSS4oXw5LJobfxnwrQV31ldttfgfLSsaEOs8hvJZxkF84fhXoyk8QzvTenvJ8Y53</span></span><br><span class="line"><span class="string">9MDHP2P1OBOPyHOxMvWoYfP+DzHvk6T5K6SaWfpzeb1WrZbkSI3Sij+4Uka+zdCa</span></span><br><span class="line"><span class="string">IXvBtYB0UU5OeMnsLpdgWbOLMjqg0JKF6BAyCgpnYbh9OeJ2c0oP2AqfxDPcdZN/</span></span><br><span class="line"><span class="string">tjpUx+WpjdjZAgMBAAGgADANBgkqhkiG9w0BAQsFAAOCAQEAGTlfc6+S5ptsyJ47</span></span><br><span class="line"><span class="string">lN8+neD6+9wX+5zomp3TUHbikSAdUvwNHnZJb2M3Mrg5q+Lde9MLj0W3rlVNx8Sr</span></span><br><span class="line"><span class="string">4OMVvO/f/C/cUp0r6Qn2RRUtP9HRCthuQTP+61cXr8WUpOjcbnr6VE2tJ285KdU2</span></span><br><span class="line"><span class="string">uR9ODTwfl5iP6hwyBlXLkDohhDuGAYlEL93yt3OzCTddeVFqklhD5cAlWX3s+pqm</span></span><br><span class="line"><span class="string">Xzv70KUy68rCL1YDjgXX6u6QZ+63z+pmQoXv/Bk6JYUAqalKeeQH/VtHGwaJ6UuP</span></span><br><span class="line"><span class="string">QF40i8ffeFuk8ZmgCB1jm57MPR1oyorgI72063wE6cvrf0OLFSCJfufyab5mvzV/</span></span><br><span class="line"><span class="string">bNjXbQ==</span></span><br><span class="line"><span class="string">-----END CERTIFICATE REQUEST-----</span></span><br></pre></td></tr></table></figure></p><p>4) 生成服务器认证证书<br><code>openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt</code></p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span 
class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br></pre></td><td class="code"><pre><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt</span></span><br><span class="line">Signature ok</span><br><span class="line">subject=C = CN, ST = Hebei, L = Shijiazhuang, O = It, OU = www.test.com, CN = <span class="built_in">test</span>@qq.com, emailAddress = <span class="built_in">test</span>@qq.com</span><br><span class="line">Getting Private key</span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># ls</span></span><br><span class="line">server.crt server.csrserver.key</span><br><span class="line">[root@5ef46ce6b610 ssl]<span class="comment"># cat server.crt </span></span><br><span class="line">-----BEGIN CERTIFICATE-----</span><br><span class="line">MIIDnTCCAoUCFDma9qKjZRh7KOsFlB/xS+FVG7xJMA0GCSqGSIb3DQEBCwUAMIGK</span><br><span class="line">MQswCQYDVQQGEwJDTjEOMAwGA1UECAwFSGViZWkxFTATBgNVBAcMDFNoaWppYXpo</span><br><span class="line">dWFuZzELMAkGA1UECgwCSXQxFTATBgNVBAsMDHd3dy50ZXN0LmNvbTEUMBIGA1UE</span><br><span class="line">AwwLdGVzdEBxcS5jb20xGjAYBgkqhkiG9w0BCQEWC3Rlc3RAcXEuY29tMB4XDTIw</span><br><span class="line">MDQwOTA3MTcxNloXDTIxMDQwOTA3MTcxNlowgYoxCzAJBgNVBAYTAkNOMQ4wDAYD</span><br><span class="line">VQQIDAVIZWJlaTEVMBMGA1UEBwwMU2hpamlhemh1YW5nMQswCQYDVQQKDAJJdDEV</span><br><span class="line">MBMGA1UECwwMd3d3LnRlc3QuY29tMRQwEgYDVQQDDAt0ZXN0QHFxLmNvbTEaMBgG</span><br><span class="line">CSqGSIb3DQEJARYLdGVzdEBxcS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw</span><br><span class="line">ggEKAoIBAQCZ/NmYb3WyWrxrkI1o0XbxAVLIHGHgA7rO1MLMqk9ejEI0A2bwGeZY</span><br><span class="line">t3cQjAFkJhaHolSt9zwfFMBNtbIfDQqoTqsQOW6LXluD1yu27XGsPxchumD+7N8M</span><br><span class="line">N6aD16/F1epDg+NxuiPFGSS4oXw5LJobfxnwrQV31ldttfgfLSsaEOs8hvJZxkF8</span><br><span class="line">4fhXoyk8QzvTenvJ8Y539MDHP2P1OBOPyHOxMvWoYfP+DzHvk6T5K6SaWfpzeb1W</span><br><span class="line">rZbkSI3Sij+4Uka+zdCaIXvBtYB0UU5OeMnsLpdgWbOLMjqg0JKF6BAyCgpnYbh9</span><br><span class="line">OeJ2c0oP2AqfxDPcdZN/tjpUx+WpjdjZAgMBAAEwDQYJKoZIhvcNAQELBQADggEB</span><br><span class="line">AFrdSAQ4MM6sHUZWKJ2YzcXUjt/kG+h23itQ0uF4OqW05U0pSFCf6iG/SVtC9TIh</span><br><span class="line">z76uih7Nk2NwJ5IPfyYJfM+CXLf2vxv8y9QuA8D9dWQqMcliOl1XI3E36mK9mMfj</span><br><span class="line">+x7TCaNbq02AvlYVyp9Ex7SwI8zfn54i34uM9+OhJGWWeGDKDzNtjQSQzlM0NAuP</span><br><span class="line">i/WzDgNbl+ve27WHI9pXWAytLoEoh7NND5fKBLoqqK3Urky1vaL1YPv+MSIQ56Nr</span><br><span class="line">uLQ8Yxqz3TH0y/wNJVE3BSZvayTeP5bvLWVU8jHLWZSRQelx++UpNFEtD/nALJAJ</span><br><span class="line">e1BLIz/apbR6z4cmpvZoGLQ=</span><br><span class="line">-----END CERTIFICATE-----</span><br></pre></td></tr></table></figure><h3 id="配置HTTPS网站"><a href="#配置HTTPS网站" class="headerlink" title="配置HTTPS网站"></a>配置HTTPS网站</h3><p>1) 在主机编辑hosts文件,并使用ping命令检查<br> <code>虚拟机的IP地址 www.test.com</code></p><p>2) 编辑/etc/nginx/nginx.conf文件,配置HTTPS站点<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span 
class="line">11</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 443;</span><br><span class="line"> server_name www.test.com;</span><br><span class="line"> root /var/www/html;</span><br><span class="line"> ssl on;</span><br><span class="line"> ssl_certificate /etc/nginx/ssl/server.crt;</span><br><span class="line"> ssl_certificate_key /etc/nginx/ssl/server.key;</span><br><span class="line"> location / {</span><br><span class="line"> index index.html;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>3) 编辑 /var/www/html/index.html,重载nginx<br><figure class="highlight html"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="tag"><<span class="name">h1</span>></span>this is https site!!<span class="tag"></<span class="name">h1</span>></span></span><br></pre></td></tr></table></figure></p><p><code>systemctl restart nginx</code></p><p>4) 重载nginx,在主机使用浏览器访问 <strong><a href="https://www.test.com" target="_blank" rel="noopener">https://www.test.com</a></strong><br> <img src="https://img-blog.csdnimg.cn/2020040915302657.png" alt="在这里插入图片描述"><br>5) 在浏览器中查看网站证书,<br><img src="https://img-blog.csdnimg.cn/20200409153032840.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="环境要求"><a href="#环境要求" class="headerlink" title="环境要求"></a>环境要求</h2><p>vmware虚拟机双核2G内存以上<br>安装有CentOS7和docker</p>
<h2 id="配置nginx图片防盗
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(一)----Docker中安装与配置Nginx</title>
<link href="https://plutoacharon.github.io/2020/04/09/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%B8%80%EF%BC%89----Docker%E4%B8%AD%E5%AE%89%E8%A3%85%E4%B8%8E%E9%85%8D%E7%BD%AENginx%20/"/>
<id>https://plutoacharon.github.io/2020/04/09/HA高可用与负载均衡入门到实战(一)----Docker中安装与配置Nginx /</id>
<published>2020-04-09T12:30:30.000Z</published>
<updated>2020-04-09T12:30:36.250Z</updated>
<content type="html"><![CDATA[<h2 id="实现Docker容器中安装配置Nginx"><a href="#实现Docker容器中安装配置Nginx" class="headerlink" title="实现Docker容器中安装配置Nginx"></a>实现Docker容器中安装配置Nginx</h2><h3 id="1-启动进入容器"><a href="#1-启动进入容器" class="headerlink" title="1. 启动进入容器"></a>1. 启动进入容器</h3><p><strong>1.1 拉取centos镜像:</strong><br><code>docker pull centos</code></p><blockquote><p>注意: 这样拉取的是最新的centos8镜像,如果想要拉取centos7则使用<code>docker pull centos:7</code></p></blockquote><p><strong>1.2 启动进入容器</strong><br><code>docker run -d --privileged --name nginx centos:v1 /usr/sbin/init</code><br>我这里起名为<code>myNginx</code>, 名字都可以只要不和其他容器冲突就行<br><img src="https://img-blog.csdnimg.cn/20200319122650212.png" alt="在这里插入图片描述"><br><code>docker exec -it 容器ID /bin/bash</code>进入容器<br><img src="https://img-blog.csdnimg.cn/20200319122659668.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="2-在容器中安装Nginx"><a href="#2-在容器中安装Nginx" class="headerlink" title="2. 在容器中安装Nginx"></a>2. 在容器中安装Nginx</h3><p><strong>2.1 在容器中编辑/etc/yum.repos.d/nginx.repo设置yum源</strong></p><pre><code>[nginx]name=nginx repobaseurl=http://nginx.org/packages/centos/$releasever/$basearch/gpgcheck=0enabled=1</code></pre><p><strong>2.2 yum install -y nginx安装</strong><br><img src="https://img-blog.csdnimg.cn/20200319122936842.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>2.3 启动nginx</strong></p><pre><code>systemctl start nginxsystemctl enable nginx #设置开机自动启动</code></pre><p><strong>2.4 保存容器</strong><br><code>docker commit 容器ID nginx</code><br><img src="https://img-blog.csdnimg.cn/20200319123051761.png" alt="在这里插入图片描述"></p><h3 id="3-启动Nginx"><a href="#3-启动Nginx" class="headerlink" title="3. 启动Nginx"></a>3. 启动Nginx</h3><p><strong>3.1 启动容器</strong><br><code>docker run -d -p 80:80 --privileged nginx /usr/sbin/init</code><br><img src="https://img-blog.csdnimg.cn/20200319123202712.png" alt="在这里插入图片描述"></p><p><strong>3.2 主机中使用浏览器访问虚拟机IP地址</strong><br><img src="https://img-blog.csdnimg.cn/20200319123229811.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="4-配置nginx主目录"><a href="#4-配置nginx主目录" class="headerlink" title="4. 配置nginx主目录"></a>4. 配置nginx主目录</h3><p><strong>4.1 进入nginx容器,查看/etc/nginx/nginx.conf文件</strong><br>更改root目录<br><img src="https://img-blog.csdnimg.cn/20200319123432591.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>4.2 建立/var/webroot/www目录</strong></p><p>编辑index.html文件<br><img src="https://img-blog.csdnimg.cn/20200319123534990.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>4.3 重新启动nginx服务并在主机使用浏览器访问</strong><br><img src="https://img-blog.csdnimg.cn/20200319123609739.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="5-配置Nginx错误页重定向"><a href="#5-配置Nginx错误页重定向" class="headerlink" title="5. 配置Nginx错误页重定向"></a>5. 
配置Nginx错误页重定向</h3><p><strong>5.1 编辑/etc/nginx/nginx.conf文件,配置error_page指令指定404页面</strong><br><img src="https://img-blog.csdnimg.cn/2020031912371967.png" alt="在这里插入图片描述"><br><strong>5.2 在/var/webroot/www目录,编辑404.html文件</strong><br><img src="https://img-blog.csdnimg.cn/20200319123744306.png" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200319123755113.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><strong>5.3 配置error_page 404 =200更改响应状态码</strong><br><img src="https://img-blog.csdnimg.cn/20200319123820117.png" alt="在这里插入图片描述"><br>重启服务<br><img src="https://img-blog.csdnimg.cn/2020031912382874.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="6-配置nginx访问控制权"><a href="#6-配置nginx访问控制权" class="headerlink" title="6. 配置nginx访问控制权"></a>6. 配置nginx访问控制权</h3><p><strong>6.1 在server块内增加deny all指令</strong><br><img src="https://img-blog.csdnimg.cn/20200319123935234.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200319123949645.png" alt="在这里插入图片描述"><br><strong>6.2 使用location块进行访问控制</strong><br><img src="https://img-blog.csdnimg.cn/20200319124034356.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200319124101805.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200319124049615.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><p>可以看到,当直接访问<code>直接访问http://192.168.0.131/</code>deny拒绝<br>访问<code>访问index.html页面</code>allow允许</p><h4 id="原因"><a href="#原因" class="headerlink" title="原因:"></a>原因:</h4><p><code>http://192.168.0.131/</code>的结果是 <code>403 Forbidden</code>,说明被匹配到<code>location / {..deny all;}</code>了</p><p>原因很简单HTTP 请求 GET / 被“严格精确”匹配到了普通<code>location / {}</code> ,则会停止搜索正则<code>location ;</code></p><p><code>http://192.168.0.131/index.html</code> 结果是之前设置的index页面,说明没有被<code>location / {…deny all;}</code>匹配,否则会 403 Forbidden。</p><p>但 /index.html 的确也是以“ / ”开头的,只不过此时的普通<code>location /</code>的匹配结果是<code>最大前缀</code>匹配,所以 Nginx 会继续搜索正则<code>location , location ~ \.html$</code>表达了以<code>.html</code>结尾的都<code>allow all</code>; 于是接着就访问到了实际存在的<code>index.html</code>页面。</p>]]></content>
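<p>把正文中在容器内安装 nginx 的步骤整理成一个可直接执行的草稿(以下命令均在容器内执行,yum 源文件内容逐行列出,便于复制;最后的 404 检查假设已按正文配置 error_page 404 =200):</p>
<pre><code>#!/bin/bash
# 写入 nginx 官方 yum 源(即正文 /etc/yum.repos.d/nginx.repo 的内容)
printf '%s\n' \
    '[nginx]' \
    'name=nginx repo' \
    'baseurl=http://nginx.org/packages/centos/$releasever/$basearch/' \
    'gpgcheck=0' \
    'enabled=1' > /etc/yum.repos.d/nginx.repo

yum install -y nginx
systemctl start nginx
systemctl enable nginx        # 设置开机自动启动

# 验证错误页重定向:请求不存在的页面,若已配置 error_page 404 =200,应返回 200
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/not-exist.html
</code></pre>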
<summary type="html">
<h2 id="实现Docker容器中安装配置Nginx"><a href="#实现Docker容器中安装配置Nginx" class="headerlink" title="实现Docker容器中安装配置Nginx"></a>实现Docker容器中安装配置Nginx</h2><
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(二)----日志和配置Nginx虚拟主机</title>
<link href="https://plutoacharon.github.io/2020/04/09/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%BA%8C%EF%BC%89----%E6%97%A5%E5%BF%97%E5%92%8C%E9%85%8D%E7%BD%AENginx%E8%99%9A%E6%8B%9F%E4%B8%BB%E6%9C%BA/"/>
<id>https://plutoacharon.github.io/2020/04/09/HA高可用与负载均衡入门到实战(二)----日志和配置Nginx虚拟主机/</id>
<published>2020-04-09T12:30:30.000Z</published>
<updated>2020-04-09T12:30:45.331Z</updated>
<content type="html"><![CDATA[<h2 id="实验环境"><a href="#实验环境" class="headerlink" title="实验环境"></a>实验环境</h2><p>vmware虚拟机双核2G内存以上<br>安装有CentOS7和docker</p><h2 id="查看与管理nginx日志"><a href="#查看与管理nginx日志" class="headerlink" title="查看与管理nginx日志"></a>查看与管理nginx日志</h2><h3 id="启用nginx容器"><a href="#启用nginx容器" class="headerlink" title="启用nginx容器"></a>启用nginx容器</h3><ol><li>启动容器docker run -d –privileged -p 80:80 nginx /usr/sbin/init</li><li>查看容器docker ps</li><li>进入容器docker exec -it 容器ID /bin/bash<br><img src="https://img-blog.csdnimg.cn/2020040911025214.png" alt="在这里插入图片描述"><br><img src="https://img-blog.csdnimg.cn/20200409110300595.png" alt="在这里插入图片描述"><h3 id="配置nginx日志"><a href="#配置nginx日志" class="headerlink" title="配置nginx日志"></a>配置nginx日志</h3>1) 打开/etc/nginx/nginx.conf文件,查看log_format与access_log的配置<br>2) 配置日志文件存放位置/var/log/nginx/access.log<br><img src="https://img-blog.csdnimg.cn/20200409110404766.png" alt="在这里插入图片描述"><br>3) 使用浏览器访问nginx并查看日志记录<br><img src="https://img-blog.csdnimg.cn/20200409110422861.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><br>4) 打开/etc/nginx/nginx.conf文件,查看error_log的配置<br><img src="https://img-blog.csdnimg.cn/20200409110454962.png" alt="在这里插入图片描述"></li></ol><p>5) 使用浏览器访问nginx并查看错误日志记录<br><img src="https://img-blog.csdnimg.cn/20200409112245758.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"><img src="https://img-blog.csdnimg.cn/20200409112253725.png" alt="在这里插入图片描述"></p><h3 id="配置nginx日志文件切割"><a href="#配置nginx日志文件切割" class="headerlink" title="配置nginx日志文件切割"></a>配置nginx日志文件切割</h3><p>1) 编写shell脚本/var/log/nginx/autolog.sh,自动备份前一天的日志<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#!/bin/bash</span></span><br><span class="line"><span class="comment">#nginx日志存放的目录</span></span><br><span class="line">log_path=”/var/<span class="built_in">log</span>/nginx”</span><br><span class="line"><span class="comment">#备份日志文件</span></span><br><span class="line">mv <span class="variable">$log_path</span>/access.log <span class="variable">$log_path</span>/`date +<span class="string">"%Y%m%d%H%M"</span>`.<span class="built_in">log</span></span><br><span class="line"><span class="comment">#重新打开nginx日志文件</span></span><br><span class="line">nginx -s reopen</span><br></pre></td></tr></table></figure></p><ol start="2"><li>赋予权限755,并执行<br><img src="https://img-blog.csdnimg.cn/20200409112642620.png" alt="在这里插入图片描述"><br>3) 设置定时任务,每天零点零分自动执行脚本<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">crontab -e</span><br><span class="line">0 0 * * * /var/<span class="built_in">log</span>/nginx/autolog.sh >/dev/null 2>&1</span><br></pre></td></tr></table></figure></li></ol><p>4) 查看定时任务<br><code>crontab -l</code></p><h2 id="配置nginx虚拟主机"><a href="#配置nginx虚拟主机" class="headerlink" title="配置nginx虚拟主机"></a>配置nginx虚拟主机</h2><h3 id="配置-虚拟主机站点文件"><a href="#配置-虚拟主机站点文件" class="headerlink" title="配置 虚拟主机站点文件"></a>配置 虚拟主机站点文件</h3><p>1) 
建立/var/webroot/site1和/var/webroot/site2目录<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">mkdir /var/webroot</span><br><span class="line">mkdir /var/webroot/site1</span><br><span class="line">mkdir /var/webroot/site2</span><br></pre></td></tr></table></figure></p><p>2) 在两个目录下新建index.html文件,内容分别为site1和site2;<br><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">echo</span> -e <span class="string">"site1"</span> >> /var/wwwroot/site1/index.html</span><br><span class="line"><span class="built_in">echo</span> -e <span class="string">"site2"</span> >> /var/wwwroot/site2/index.html</span><br></pre></td></tr></table></figure></p><h3 id="配置基于端口的虚拟主机"><a href="#配置基于端口的虚拟主机" class="headerlink" title="配置基于端口的虚拟主机"></a>配置基于端口的虚拟主机</h3><ol><li>编辑nginx配置文件</li></ol><p>vim /etc/nginx/conf.d/vhosts.conf</p><ol start="2"><li>添加以下内容<figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 8081;</span><br><span class="line"> root /var/webroot/site1;</span><br><span class="line"> index index.html;</span><br><span class="line"></span><br><span class="line"> location / {</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line">server {</span><br><span class="line"> listen 8082;</span><br><span class="line"> root /var/webroot/site2;</span><br><span class="line"> index index.html;</span><br><span class="line"></span><br><span class="line"> location / {</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></li></ol><h4 id="启动nginx服务"><a href="#启动nginx服务" class="headerlink" title="启动nginx服务"></a>启动nginx服务</h4><p><code>systemctl restart nginx</code></p><h4 id="在宿主机访问两个站点"><a href="#在宿主机访问两个站点" class="headerlink" title="在宿主机访问两个站点"></a>在宿主机访问两个站点</h4><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">http://ip地址:8081/</span><br><span class="line">http://ip地址:8082/</span><br></pre></td></tr></table></figure><h3 id="配置基于域名的虚拟主机"><a href="#配置基于域名的虚拟主机" class="headerlink" title="配置基于域名的虚拟主机"></a>配置基于域名的虚拟主机</h3><h4 id="在主机编辑C-Windows-System32-drivers-etc-hosts文件"><a href="#在主机编辑C-Windows-System32-drivers-etc-hosts文件" class="headerlink" title="在主机编辑C:\Windows\System32\drivers\etc\hosts文件"></a>在主机编辑C:\Windows\System32\drivers\etc\hosts文件</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">虚拟机地址 site1.test.com</span><br><span class="line">虚拟机地址 
site2.test.com</span><br></pre></td></tr></table></figure><h4 id="编辑-etc-nginx-conf-d-virtual-conf文件,配置基于名字的虚拟主机"><a href="#编辑-etc-nginx-conf-d-virtual-conf文件,配置基于名字的虚拟主机" class="headerlink" title="编辑/etc/nginx/conf.d/virtual.conf文件,配置基于名字的虚拟主机"></a>编辑/etc/nginx/conf.d/virtual.conf文件,配置基于名字的虚拟主机</h4><p>删除原内容,重新添加以下内容</p><figure class="highlight bash"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site1.test.com;</span><br><span class="line"> root /var/webroot/site1;</span><br><span class="line"> index index.html;</span><br><span class="line"></span><br><span class="line"> location / {</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name site2.test.com;</span><br><span class="line"> root /var/webroot/site2;</span><br><span class="line"> index index.html;</span><br><span class="line"></span><br><span class="line"> location / {</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h4 id="重启nginx服务"><a href="#重启nginx服务" class="headerlink" title="重启nginx服务"></a>重启nginx服务</h4><p><code>systemctl restart nginx</code></p><h4 id="访问站点"><a href="#访问站点" class="headerlink" title="访问站点"></a>访问站点</h4><p><img src="https://img-blog.csdnimg.cn/20200409122430662.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p>]]></content>
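<p>补充两个可直接执行的整理草稿(示例假设):日志切割脚本中的引号需使用半角引号;站点首页文件的路径应与 server 块中 root 指令保持一致,即 /var/webroot/site1 与 /var/webroot/site2;基于域名的虚拟主机可在宿主机用 curl 带 Host 头验证,下面的 192.168.0.131 仅为示例 IP,请替换为虚拟机实际地址。</p>
<pre><code>#!/bin/bash
# /var/log/nginx/autolog.sh 的可执行版本(注意半角引号),在 nginx 容器内使用
log_path="/var/log/nginx"
# 按时间戳备份当前访问日志
mv "$log_path/access.log" "$log_path/$(date +%Y%m%d%H%M).log"
# 让 nginx 重新打开日志文件
nginx -s reopen
</code></pre>
<pre><code>#!/bin/bash
# 站点目录与首页文件,路径与 root 指令保持一致(在 nginx 容器内执行)
mkdir -p /var/webroot/site1 /var/webroot/site2
echo "site1" > /var/webroot/site1/index.html
echo "site2" > /var/webroot/site2/index.html

# 在宿主机验证基于域名的虚拟主机(192.168.0.131 为示例 IP)
curl -H "Host: site1.test.com" http://192.168.0.131/
curl -H "Host: site2.test.com" http://192.168.0.131/
</code></pre>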
<summary type="html">
<h2 id="实验环境"><a href="#实验环境" class="headerlink" title="实验环境"></a>实验环境</h2><p>vmware虚拟机双核2G内存以上<br>安装有CentOS7和docker</p>
<h2 id="查看与管理nginx日
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
<entry>
<title>HA高可用与负载均衡入门到实战(三)---- 配置Nginx支持PHP并实现动静分离</title>
<link href="https://plutoacharon.github.io/2020/04/09/HA%E9%AB%98%E5%8F%AF%E7%94%A8%E4%B8%8E%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1%E5%85%A5%E9%97%A8%E5%88%B0%E5%AE%9E%E6%88%98%EF%BC%88%E4%B8%89%EF%BC%89----%20%E9%85%8D%E7%BD%AENginx%E6%94%AF%E6%8C%81PHP%E5%B9%B6%E5%AE%9E%E7%8E%B0%E5%8A%A8%E9%9D%99%E5%88%86%E7%A6%BB/"/>
<id>https://plutoacharon.github.io/2020/04/09/HA高可用与负载均衡入门到实战(三)---- 配置Nginx支持PHP并实现动静分离/</id>
<published>2020-04-09T12:30:30.000Z</published>
<updated>2020-04-09T12:30:51.705Z</updated>
<content type="html"><![CDATA[<h2 id="实验环境"><a href="#实验环境" class="headerlink" title="实验环境"></a>实验环境</h2><p>vmware虚拟机双核2G内存以上<br>安装有CentOS7和docker</p><h2 id="配置nginx支持php"><a href="#配置nginx支持php" class="headerlink" title="配置nginx支持php"></a>配置nginx支持php</h2><h3 id="启动进入容器nginx"><a href="#启动进入容器nginx" class="headerlink" title="启动进入容器nginx"></a>启动进入容器nginx</h3><ol><li>启动容器docker run -d –privileged -p 80:80 nginx /usr/sbin/init<br><img src="https://img-blog.csdnimg.cn/20200409130628512.png" alt="在这里插入图片描述"><br>2) 查看容器docker ps<br><img src="https://img-blog.csdnimg.cn/20200409130650454.png" alt="在这里插入图片描述"><br>3) 进入容器docker exec -it 容器ID /bin/bash<br><img src="https://img-blog.csdnimg.cn/20200409130658431.png" alt="在这里插入图片描述"><h3 id="使用yum方式安装php-fpm"><a href="#使用yum方式安装php-fpm" class="headerlink" title="使用yum方式安装php-fpm"></a>使用yum方式安装php-fpm</h3></li></ol><p>1) 使用yum 方式安装php-fpm</p><p>2) 查看php-fpm配置文件:/etc/php-fpm.conf和/etc/php-fpm.d/<a href="http://www.conf" target="_blank" rel="noopener">www.conf</a></p><p>3) 编辑/etc/php-fpm.d/<a href="http://www.conf,修改监听地址和端口" target="_blank" rel="noopener">www.conf,修改监听地址和端口</a><br> <img src="https://img-blog.csdnimg.cn/20200409130800437.png" alt="在这里插入图片描述"></p><p>4) 启动php-fpm,systemctl start php-fpm</p><p>5) 配置php-fpm自启动,systemctl enable php-fpm</p><p>6) netstat -antp,查看php-fpm监听端口;<br> <img src="https://img-blog.csdnimg.cn/20200409130837812.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="配置nginx支持php-1"><a href="#配置nginx支持php-1" class="headerlink" title="配置nginx支持php"></a>配置nginx支持php</h3><p>1) 编辑/etc/nginx/nginx.conf文件, 重新启动nginx服务</p><p>删除原有server代码块</p><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> root /var/www;</span><br><span class="line"> index index.html;</span><br><span class="line"> }</span><br><span class="line">location ~ \.php$ {</span><br><span class="line"> fastcgi_pass 127.0.0.1:9000;</span><br><span class="line"> fastcgi_index index.php;</span><br><span class="line"> fastcgi_param SCRIPT_FILENAME /var/www/$fastcgi_script_name;</span><br><span class="line"> include fastcgi_params;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p><strong>关于FastCGI</strong>:</p><p>请求处理流程:CGI规范允许Web服务器根据浏览器请求调用CGI程序,并将其输出结果通过响应发送给浏览器,从而使Web服务器支持处理复杂的网站业务需求<br><img src="https://img-blog.csdnimg.cn/20200409133304831.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><p>2) 在/var/www目录下建立index.php文件<br><figure class="highlight php"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="meta"><?php</span></span><br><span class="line"> phpinfo();</span><br><span class="line"><span class="meta">?></span></span><br></pre></td></tr></table></figure></p><p>3) 在主机中使用浏览器访问http://虚拟机地址/index.php</p><p><img src="https://img-blog.csdnimg.cn/20200409132134974.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h2 id="配置Nginx-Apache实现动静分离"><a href="#配置Nginx-Apache实现动静分离" class="headerlink" title="配置Nginx+Apache实现动静分离"></a>配置Nginx+Apache实现动静分离</h2><p>动静分离:</p><p>由Nginx提供对外访问,静态请求直接由Nginx处理,动态请求转交给Apache处理,这样就实现了动静分离。<br>动态请求是指该请求需要服务器端的程序处理。静态请求不需要程序处理,直接读取文件并返回即可。<br><img src="https://img-blog.csdnimg.cn/20200409133509694.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQzNDQyNTI0,size_16,color_FFFFFF,t_70" alt="在这里插入图片描述"></p><h3 id="启动进入容器centos-v1"><a href="#启动进入容器centos-v1" class="headerlink" title="启动进入容器centos:v1"></a>启动进入容器centos:v1</h3><p>1) 启动容器docker run -d –privileged centos:v1 /usr/sbin/init<br><img src="https://img-blog.csdnimg.cn/20200409132232847.png" alt="在这里插入图片描述"><br>2) 查看容器docker ps -a</p><p>3) 进入容器docker exec -it 容器ID /bin/bash</p><h3 id="使用yum方式安装apache和php"><a href="#使用yum方式安装apache和php" class="headerlink" title="使用yum方式安装apache和php"></a>使用yum方式安装apache和php</h3><p>1) 使用yum方式安装httpd</p><p>2) 使用yum方式安装php</p><p>3) 编辑/var/www/html/site.php文件<br><figure class="highlight php"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta"><?</span> php</span><br><span class="line"><span class="keyword">echo</span> “site2”;</span><br><span class="line"><span class="meta">?></span></span><br></pre></td></tr></table></figure></p><p>4) 重启httpd,netstat -antp查看监听端口<br> <img src="https://img-blog.csdnimg.cn/20200409132357678.png" alt="在这里插入图片描述"><br>5) 配置httpd自启动,systemctl enable httpd</p><p>6) 在虚拟机使用curl http://容器地址/site.php<br><img src="https://img-blog.csdnimg.cn/20200409132426899.png" alt="在这里插入图片描述">在虚拟机中保存容器,docker commit 容器ID php-apache</p><h3 id="配置nginx支持动静分离"><a href="#配置nginx支持动静分离" class="headerlink" title="配置nginx支持动静分离"></a>配置nginx支持动静分离</h3><p>1) 进入容器nginx</p><p>2) 编辑/etc/nginx/nginx.conf文件<br><figure class="highlight xml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line">server {</span><br><span class="line"> listen 80;</span><br><span class="line"> server_name localhost;</span><br><span class="line"> location / {</span><br><span class="line"> root /var/www;</span><br><span class="line"> index index.html;</span><br><span class="line"> }</span><br><span class="line"> location ~ \.php$ {</span><br><span class="line"> proxy_pass http://172.17.0.3;</span><br><span class="line"> proxy_set_header host $host;</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>3) 重新启动nginx服务<br>4) 在主机中使用浏览器访问http://虚拟机地址/site.php</p><p><img 
src="https://img-blog.csdnimg.cn/20200409132640385.png" alt="在这里插入图片描述"></p>]]></content>
<summary type="html">
<h2 id="实验环境"><a href="#实验环境" class="headerlink" title="实验环境"></a>实验环境</h2><p>vmware虚拟机双核2G内存以上<br>安装有CentOS7和docker</p>
<h2 id="配置nginx支持ph
</summary>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/categories/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="高可用负载均衡" scheme="https://plutoacharon.github.io/tags/%E9%AB%98%E5%8F%AF%E7%94%A8%E8%B4%9F%E8%BD%BD%E5%9D%87%E8%A1%A1/"/>
<category term="Docker" scheme="https://plutoacharon.github.io/tags/Docker/"/>
</entry>
</feed>