Installing Kubernetes v1.18.x on CentOS with kubeadm

I. Introduction

Kubernetes was created and open-sourced by Google in 2014 and is the open-source descendant of Borg, the large-scale container management system Google has run internally for more than a decade. It is an open-source platform for managing container clusters, providing automated deployment, automatic scaling, and ongoing maintenance of containerized applications.

With Kubernetes you can:

  • Deploy applications quickly
  • Scale applications rapidly
  • Roll out new application features seamlessly
  • Save resources and optimize the use of hardware

Key characteristics of Kubernetes:

  • Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud
  • Extensible: modular, pluggable, mountable, composable
  • Automated: automatic deployment, automatic restart, automatic replication, automatic scaling

Architecture of the Kubernetes project:
A cluster is made up of two kinds of nodes, Master and Node,
which correspond to control-plane nodes and compute (worker) nodes respectively.

Control-plane node:
The Master node is composed of three closely cooperating but independent components:
kube-apiserver: serves the Kubernetes API
kube-scheduler: handles scheduling
kube-controller-manager: handles container orchestration
The cluster's persistent state is processed by kube-apiserver and stored in etcd.

Compute node:
Its core component is the kubelet.
The kubelet is responsible for talking to the container runtime (for example, Docker). This interaction goes through the CRI (Container Runtime Interface), a remote-call interface that defines the core operations of a container runtime, such as all the parameters needed to start a container.
Kubernetes does not care which container runtime you deploy or how it is implemented; as long as the runtime can run standard container images, it can plug into Kubernetes by implementing the CRI.
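
Once a cluster is running (see the later sections), kubectl makes this runtime-agnostic design visible. A quick illustrative check, assuming kubectl is already configured as described below:

kubectl get nodes -o wide
# the CONTAINER-RUNTIME column shows what each kubelet talks to, e.g. docker://<version>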

II. Environment Requirements
1. Installation requirements

Before you begin, the machines used to deploy the Kubernetes cluster must meet the following requirements:

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
  • Full network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap disabled
Hostname     IP address        Role    OS         CPU/MEM  Platform
k8s-master   192.168.174.129   master  CentOS7.4  2C/2G    VMware
k8s-node1    192.168.174.130   node1   CentOS7.4  2C/2G    VMware
k8s-node2    192.168.174.131   node2   CentOS7.4  2C/2G    VMware
2. Basic node configuration

Perform the following on all nodes.

1) Set the hostname and hosts entries

# Set the hostname
hostnamectl set-hostname yourhostname
bash   # start a new shell so the change takes effect

# Check the result
hostnamectl status

# Add a hosts entry for this node
echo "your_ip  $(hostname)" >> /etc/hosts

2) Disable the firewall

# Stop the firewall
systemctl stop firewalld
# Keep it from starting at boot
systemctl disable firewalld

3) Disable SELinux

# Disable temporarily (current boot only)
setenforce 0
# Disable permanently
sed -i 's/enforcing/disabled/' /etc/selinux/config

4) Disable swap

# Disable temporarily
swapoff -a
# Disable permanently (comment out swap entries in /etc/fstab)
sed -i 's/.*swap/#&/' /etc/fstab

5) Configure a static IP
Give each node a fixed IP address so that addresses do not change once the Calico network add-on is deployed.
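
A minimal sketch of a static configuration for the first NIC on CentOS 7, followed by a network restart. The interface name ens33, the gateway, and the DNS server below are assumptions; adjust them to your environment (the address shown is the master's from the table above):

cat > /etc/sysconfig/network-scripts/ifcfg-ens33 <<EOF
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.174.129
NETMASK=255.255.255.0
GATEWAY=192.168.174.2
DNS1=114.114.114.114
EOF
systemctl restart network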

6) Load the IPVS modules
By default, kube-proxy runs in iptables mode in a cluster deployed with kubeadm.

Note that kernels newer than 4.19 have removed the nf_conntrack_ipv4 module; the Kubernetes project recommends loading nf_conntrack instead, otherwise module loading fails with an error that nf_conntrack_ipv4 cannot be found.

yum install -y ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
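
To confirm that the modules are actually loaded, a quick check (it should list the ip_vs modules and nf_conntrack loaded by the script above):

lsmod | grep -e ip_vs -e nf_conntrack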

7) Configure kernel parameters

# vim /etc/sysctl.d/kubernetes.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/kubernetes.conf

8) Raise the open-file limit

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
III. Install Docker and kubelet (on all nodes)
1. Install and configure Docker

Kubernetes uses Docker as its default CRI (container runtime), so install Docker first.
1) Configure the yum repository

# cd /etc/yum.repos.d/
# curl -O http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

2) Install Docker

yum -y install docker-ce

3) Configure Docker

# mkdir /etc/docker
# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}

4) Start Docker

systemctl enable docker
systemctl start docker
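
Once Docker is up, confirm that the cgroup driver from daemon.json took effect; kubeadm will warn later if it still reports cgroupfs:

docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd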
2. Install kubelet, kubeadm, and kubectl

1) Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2) Install kubelet, kubeadm, and kubectl
By default the latest version is installed; here the version is pinned to 1.18.4. Change it to suit your needs.

yum install -y kubelet-1.18.4 kubeadm-1.18.4 kubectl-1.18.4
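
Optionally, confirm that the pinned versions were installed before moving on:

kubeadm version -o short
kubelet --version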

3. Configure docker.service, restart Docker, and start kubelet
1) Change the Docker cgroup driver to systemd

# In /usr/lib/systemd/system/docker.service, change the line
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
# Without this change, joining worker nodes may show the following warning:
# [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
# Please follow the guide at https://kubernetes.io/docs/setup/cri/
sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

2) Restart Docker and start kubelet

systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet && systemctl start kubelet

Docker may fail to restart at this point on some machines. In this setup the likely cause is that the cgroup driver is now specified twice: once in /etc/docker/daemon.json ("exec-opts") and once on the dockerd command line (--exec-opt), and dockerd refuses to start when the same directive is given both as a flag and in the configuration file. Keeping the setting in only one of the two places fixes it; renaming daemon.json (so Docker ignores it), as some posts suggest, works for the same reason. Check journalctl -u docker for the exact error message.

IV. Initialize the Master Node (master node only)
1. Run the initialization
kubeadm init --kubernetes-version=1.18.4  \
--apiserver-advertise-address=192.168.174.129   \
--image-repository mirrorgcrio  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

The pod network CIDR is 10.122.0.0/16, and the API server address is the master's own IP.
Running kubeadm config print init-defaults prints the default configuration used for cluster initialization.
The cluster is initialized here from the command line. By default kubeadm pulls its images from k8s.gcr.io, which is not reachable from mainland China, so --image-repository points it at the mirrorgcrio mirror instead. Depending on your network speed, this step takes roughly 3 to 10 minutes.
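
As the init output below also suggests, the control-plane images can be pulled ahead of time so the init step itself runs faster, using the same repository and version as the command above:

kubeadm config images pull --image-repository mirrorgcrio --kubernetes-version v1.18.4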

A successful initialization returns output like the following:

W0623 15:38:10.822265   93979 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.174.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.174.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.174.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0623 15:44:13.593752   93979 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0623 15:44:13.599325   93979 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.514485 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ydy3g3.1lahkvfvm1qyoy86
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.174.129:6443 --token ydy3g3.1lahkvfvm1qyoy86 \
    --discovery-token-ca-cert-hash sha256:7c43918ee287d21fe9b70e4868e2e0fdd8c5f6b829a825822aecdb8d207494fc

Save the last part of this output; the kubeadm join command is what you will run on the worker nodes to join them to the cluster.

2. Configure kubectl

Option 1: via the kubeconfig file

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Option 2: via an environment variable

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc
3. Check the nodes
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   5m22s   v1.18.4
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-54f99b968c-s85xv             0/1     Pending   0          5m15s
kube-system   coredns-54f99b968c-wmffs             0/1     Pending   0          5m15s
kube-system   etcd-k8s-master                      1/1     Running   0          5m31s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          5m31s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          5m31s
kube-system   kube-proxy-n8h22                     1/1     Running   0          5m15s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          5m31s

At this point the node is NotReady because the coredns pods have not started: the cluster still lacks a network add-on.

V. Install the Calico Network (master node only)

Kubernetes supports several network add-ons; Calico is installed here.

1. Install with kubectl
[root@k8s-master ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

The first deployment attempt failed here. kubectl describe pod [podname] -n [namespace] showed that the image could not be pulled, so the Calico version was looked up in the YAML file, the images were pulled manually, and the redeployment then succeeded (see the sketch below).
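
A sketch of that manual pull, assuming the manifest is saved locally; v3.x.y is a placeholder for whatever version grep reports from your copy of calico.yaml:

curl -O https://docs.projectcalico.org/manifests/calico.yaml
grep "image:" calico.yaml
# pull each reported image on every node, for example:
docker pull calico/cni:v3.x.y
docker pull calico/pod2daemon-flexvol:v3.x.y
docker pull calico/node:v3.x.y
docker pull calico/kube-controllers:v3.x.y
kubectl apply -f calico.yaml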

2. Check node and pod status again

Check the pods and nodes:

[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-58b656d69f-zf6kz   1/1     Running   0          24m
kube-system   calico-node-9hbcx                          1/1     Running   0          24m
kube-system   coredns-54f99b968c-fp6qd                   1/1     Running   0          33m
kube-system   coredns-54f99b968c-s85xv                   1/1     Running   0          72m
kube-system   etcd-k8s-master                            1/1     Running   0          72m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          72m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          72m
kube-system   kube-proxy-n8h22                           1/1     Running   0          72m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          72m
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   72m   v1.18.4

The cluster is now in a healthy state.

VI. Deploy the Worker Nodes
1. Join the worker nodes

Use the join command printed at the end of the master initialization above. If you no longer have it, regenerate it with kubeadm token create --print-join-command.

[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.174.129:6443 --token a95vmc.yy4p8btqoa7e5dwd     --discovery-token-ca-cert-hash sha256:7c43918ee287d21fe9b70e4868e2e0fdd8c5f6b829a825822aecdb8d207494fc

Then run the join command on both worker nodes:

[root@k8s-node1 ~]# kubeadm join 192.168.174.129:6443 --token 46q0ei.ivbs1u1n2a3tayma     --discovery-token-ca-cert-hash sha256:7c43918ee287d21fe9b70e4868e2e0fdd8c5f6b829a825822aecdb8d207494fc
W0623 17:26:34.473791   81323 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Wait a moment after joining, then check the node status.

2. Check node status
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   116m   v1.18.4
k8s-node1    Ready    <none>   14m    v1.18.4
k8s-node2    Ready    <none>   13m    v1.18.4

Once all nodes are Ready, the cluster deployment is complete.

VII. Enable IPVS Mode for kube-proxy (master node only)

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"

kubectl edit cm kube-proxy -n kube-system

Then restart the kube-proxy pods on all nodes:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
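
To confirm that IPVS mode is active, a restarted kube-proxy pod should log "Using ipvs Proxier", and ipvsadm (installed during node preparation) will list the virtual servers. The pod name below is a placeholder; use one from kubectl get pod -n kube-system:

kubectl logs -n kube-system <kube-proxy-pod-name> | grep -i proxier
ipvsadm -Ln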