The sinking of the Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 of the 2224 passengers and crew aboard. This sensational tragedy shocked the international community and led to better safety regulations for ships. One of the reasons the shipwreck caused such loss of life was that there were not enough lifeboats for the passengers and crew. Although surviving the sinking involved some element of luck, some groups were more likely to survive than others, such as women, children, and the upper class. In this case study, we ask you to complete an analysis of which kinds of people were likely to survive. In particular, we ask you to apply machine learning tools to predict which passengers survived the tragedy.
Case: https://www.kaggle.com/c/titanic/overview
The features in the extracted dataset include ticket class, survival status, embarkation, age, home.dest, room, boat, and sex.
We begin by loading the data and taking a look at it:
import pandas as pd
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_graphviz
# The data can be downloaded from GitHub
titanic = pd.read_csv("data/titanic/train.csv")
titanic
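Before selecting features, it helps to check which of the columns we plan to use are incomplete. A minimal sketch (assuming the lowercase column names used in this version of the dataset):

# Count missing values in the columns used below; the 'age' column
# typically contains NaNs, which is why it is imputed in the next step
print(titanic[["pclass", "age", "sex", "survived"]].isnull().sum())
print(titanic.shape)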
# 2.1 Select the feature columns and the target column
x = titanic[["pclass", "age", "sex"]]
y = titanic["survived"]

# 2.2 Handle missing values
# Missing values must be dealt with; the categorical features will be
# converted later via dictionary feature extraction (one-hot encoding)
x = x.copy()
x["age"] = x["age"].fillna(x["age"].mean())

# 2.3 Split the dataset
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=22)
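A quick sanity check that the imputation worked and that the split behaved as expected (a small sketch; train_test_split holds out 25% for testing by default):

print(x["age"].isnull().sum())      # 0 -- no missing ages remain
print(x_train.shape, x_test.shape)  # roughly a 75% / 25% split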
The features contain categorical strings, so they need one-hot encoding (DictVectorizer); x.to_dict(orient="records") converts each DataFrame row into a dictionary mapping feature names to values.
# Convert x into dictionary records with x.to_dict(orient="records"), e.g.
# [{"pclass": "1st", "age": 29.00, "sex": "female"}, ...]
x_train = x_train.to_dict(orient="records")
x_test = x_test.to_dict(orient="records")

# Feature transformation: one-hot encode the categorical features
transfer = DictVectorizer(sparse=False)
x_train = transfer.fit_transform(x_train)
x_test = transfer.transform(x_test)  # reuse the vocabulary fitted on x_train
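To see what DictVectorizer actually produces, here is a tiny standalone sketch on toy records (not the real data). String-valued features are expanded into one column per category, while numeric features pass through unchanged:

demo = [{"pclass": "1st", "age": 29.0, "sex": "female"},
        {"pclass": "3rd", "age": 24.0, "sex": "male"}]
dv = DictVectorizer(sparse=False)
print(dv.fit_transform(demo))
# [[29.  1.  0.  1.  0.]
#  [24.  0.  1.  0.  1.]]
print(dv.get_feature_names_out())
# ['age' 'pclass=1st' 'pclass=3rd' 'sex=female' 'sex=male']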
In the decision tree API, if max_depth is not specified, the tree keeps splitting under the information entropy criterion until the leaves are pure or no further split is possible. Here we can limit the size of the tree by specifying its depth (a small depth-comparison sketch follows the evaluation step below).
# 4. Machine learning (decision tree)
estimator = DecisionTreeClassifier(criterion="entropy", max_depth=5)
estimator.fit(x_train, y_train)
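The criterion="entropy" used above scores candidate splits with information entropy, H = -Σ p_i · log2(p_i) over the class proportions p_i. A minimal sketch of that calculation, applied to the training labels:

def entropy(labels):
    # H = -sum(p * log2(p)) over the class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

print(entropy(y_train))  # impurity of the root node before any split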
# 5. Model evaluation
estimator.score(x_test, y_test)  # accuracy on the test set
estimator.predict(x_test)        # predicted labels for the test set
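As noted above, max_depth controls how large the tree is allowed to grow. A quick sketch comparing test accuracy across a few depths (the exact numbers depend on the random split, so treat this as exploratory rather than definitive):

for depth in (3, 5, 7, None):
    clf = DecisionTreeClassifier(criterion="entropy", max_depth=depth)
    clf.fit(x_train, y_train)
    print(depth, clf.score(x_test, y_test))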
The trained tree can be exported in Graphviz dot format with sklearn.tree.export_graphviz().
# 6. Visualize the decision tree
# Use the feature names produced by the DictVectorizer (on older
# scikit-learn versions, use transfer.get_feature_names() instead)
export_graphviz(estimator, out_file="./data/tree.dot",
                feature_names=transfer.get_feature_names_out())
The contents of a dot file look like the following (this particular example was exported from a tree trained on the iris dataset, hence the petal/sepal feature names):
digraph Tree {
node [shape=box] ;
0 [label="petal length (cm) <= 2.45\nentropy = 1.584\nsamples = 112\nvalue = [39, 37, 36]"] ;
1 [label="entropy = 0.0\nsamples = 39\nvalue = [39, 0, 0]"] ;
0 -> 1 [labeldistance=2.5, labelangle=45, headlabel="True"] ;
2 [label="petal width (cm) <= 1.75\nentropy = 1.0\nsamples = 73\nvalue = [0, 37, 36]"] ;
0 -> 2 [labeldistance=2.5, labelangle=-45, headlabel="False"] ;
3 [label="petal length (cm) <= 5.05\nentropy = 0.391\nsamples = 39\nvalue = [0, 36, 3]"] ;
2 -> 3 ;
4 [label="sepal length (cm) <= 4.95\nentropy = 0.183\nsamples = 36\nvalue = [0, 35, 1]"] ;
3 -> 4 ;
5 [label="petal length (cm) <= 3.9\nentropy = 1.0\nsamples = 2\nvalue = [0, 1, 1]"] ;
4 -> 5 ;
6 [label="entropy = 0.0\nsamples = 1\nvalue = [0, 1, 0]"] ;
5 -> 6 ;
7 [label="entropy = 0.0\nsamples = 1\nvalue = [0, 0, 1]"] ;
5 -> 7 ;
8 [label="entropy = 0.0\nsamples = 34\nvalue = [0, 34, 0]"] ;
4 -> 8 ;
9 [label="petal width (cm) <= 1.55\nentropy = 0.918\nsamples = 3\nvalue = [0, 1, 2]"] ;
3 -> 9 ;
10 [label="entropy = 0.0\nsamples = 2\nvalue = [0, 0, 2]"] ;
9 -> 10 ;
11 [label="entropy = 0.0\nsamples = 1\nvalue = [0, 1, 0]"] ;
9 -> 11 ;
12 [label="petal length (cm) <= 4.85\nentropy = 0.191\nsamples = 34\nvalue = [0, 1, 33]"] ;
2 -> 12 ;
13 [label="entropy = 0.0\nsamples = 1\nvalue = [0, 1, 0]"] ;
12 -> 13 ;
14 [label="entropy = 0.0\nsamples = 33\nvalue = [0, 0, 33]"] ;
12 -> 14 ;
}
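To turn the .dot file into an image, one option is the graphviz Python package (a sketch, assuming the Graphviz binaries are installed on the system):

import graphviz

with open("./data/tree.dot") as f:
    dot_source = f.read()
# Writes ./data/tree.png; cleanup=True removes the intermediate source copy
graphviz.Source(dot_source).render("./data/tree", format="png", cleanup=True)
# Alternatively, from the command line: dot -Tpng ./data/tree.dot -o tree.png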