
Andrew Ng Deep Learning Course 1, Week 2: Neural Network Basics Assignment (Part 2), Code Implementation

Date: 2018-06-24 19:56:58



Table of Contents

- Required libraries
- Steps
- Loading the training and test sets
- Exploring the training and test sets
- Viewing an image
- Reshaping the data
- Standardizing the data
- Defining the sigmoid function
- Initializing the parameters
- Defining forward propagation, the cost function, and gradient descent
- Optimization
- Prediction
- Assembling the model: combining the optimization and prediction modules
- Complete code

Required libraries

```python
import numpy as np               # matrix operations
import matplotlib.pyplot as plt  # plotting
import h5py                      # reading .h5 files
```

Steps

Implement logistic regression with the following steps:

1. Define the model structure

2. Initialize the model parameters

3. Loop:

3.1. Forward propagation

3.2. Compute the cost

3.3. Backward propagation

3.4. Update the parameters

4. Assemble everything into a complete model

Loading the training and test sets

```python
train_dataset = h5py.File('D:\\0112zhaohuan\\software\\pycharm\\code\\hello\\train_catvnoncat.h5', 'r')
test_dataset = h5py.File('D:\\0112zhaohuan\\software\\pycharm\\code\\hello\\test_catvnoncat.h5', 'r')

# Pull the arrays out of the h5 files (these names are used in the steps below).
train_data_org = train_dataset['train_set_x'][:]
train_labels_org = train_dataset['train_set_y'][:]
test_data_org = test_dataset['test_set_x'][:]
test_labels_org = test_dataset['test_set_y'][:]
```

Exploring the training and test sets

```python
for key in train_dataset:
    print(key)
print(train_dataset['train_set_x'].shape)
print(train_dataset['train_set_y'].shape)
# print(train_dataset['train_set_x'][:])
print(test_dataset['test_set_x'].shape)
print(test_dataset['test_set_y'].shape)
# print(test_dataset['test_set_x'][:])
```

The training data has shape (209, 64, 64, 3) and the test data (50, 64, 64, 3); the training labels have shape (209,) and the test labels (50,).
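The feature length used later (12288) follows directly from these shapes: each image is 64 × 64 pixels with 3 color channels. A minimal sketch, using hypothetical placeholder arrays with the same shapes as the real `train_set_x` / `train_set_y`:

```python
import numpy as np

# Hypothetical placeholders with the same shapes as the real dataset arrays.
fake_train_x = np.zeros((209, 64, 64, 3), dtype=np.uint8)
fake_train_y = np.zeros((209,), dtype=np.uint8)

m = fake_train_x.shape[0]                          # 209 samples
n_features = int(np.prod(fake_train_x.shape[1:]))  # 64 * 64 * 3 = 12288 features
print(m, n_features)  # 209 12288
```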

Viewing an image

```python
plt.imshow(train_data_org[176])
```

Reshaping the data

```python
m_train = train_data_org.shape[0]  # number of training samples: 209
m_test = test_data_org.shape[0]    # number of test samples: 50
train_data_tran = train_data_org.reshape(m_train, -1).T
test_data_tran = test_data_org.reshape(m_test, -1).T
print(train_data_tran.shape, test_data_tran.shape)
train_labels_tran = train_labels_org[np.newaxis, :]
test_labels_tran = test_labels_org[np.newaxis, :]
print(train_labels_tran.shape, test_labels_tran.shape)
```

After reshaping, the training data has shape (12288, 209) and the test data (12288, 50); the training labels have shape (1, 209) and the test labels (1, 50).
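It is worth checking that `reshape(m, -1).T` really puts one flattened image per column. A minimal sketch with a tiny made-up array (two "images" of shape (2, 2, 3) instead of (64, 64, 3)):

```python
import numpy as np

# Two tiny "images" of shape (2, 2, 3).
X_org = np.arange(2 * 2 * 2 * 3).reshape(2, 2, 2, 3)
m = X_org.shape[0]
X_tran = X_org.reshape(m, -1).T
print(X_tran.shape)  # (12, 2): features x samples
# Column i of X_tran is image i flattened in row-major order.
print(np.array_equal(X_tran[:, 0], X_org[0].ravel()))  # True
```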

Standardizing the data

Each pixel value lies between 0 and 255. Dividing everything by 255 scales the values into [0, 1], which keeps the features on a comparable scale and helps gradient descent converge.

```python
train_data_sta = train_data_tran / 255
test_data_sta = test_data_tran / 255
```

Defining the sigmoid function

The sigmoid function: $a=\sigma(z)=\frac{1}{1+e^{-z}}$

```python
def sigmoid(z):
    # Input z; return the value a after applying sigmoid.
    a = 1 / (1 + np.exp(-z))
    return a
```
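Two quick sanity checks on sigmoid: it maps 0 to exactly 0.5 and saturates toward 0 or 1 for large |z|. (Note that `np.exp(-z)` can overflow and emit a RuntimeWarning for very large negative z; with this assignment's standardized inputs that is usually not an issue.)

```python
import numpy as np

def sigmoid(z):
    a = 1 / (1 + np.exp(-z))
    return a

print(sigmoid(0))                        # 0.5
print(sigmoid(np.array([-10.0, 10.0])))  # close to [0, 1]
```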

Initializing the parameters

```python
n_dim = train_data_sta.shape[0]  # feature length n
print(n_dim)                     # 12288
w = np.zeros((n_dim, 1))         # initialize w as an n x 1 matrix
b = 0                            # initialize the scalar b
```

Defining forward propagation, the cost function, and gradient descent

Forward propagation: $Z=w^TX+b$; $A=\mathrm{sigmoid}(Z)$

Cost function: $J=-\frac{1}{m}\sum_{i=1}^m\left[Y\log A+(1-Y)\log(1-A)\right]$

Gradient descent: $dw=\frac{\partial J}{\partial w}=\frac{1}{m}X(A-Y)^T$; $db=\frac{\partial J}{\partial b}=\frac{1}{m}\sum(A-Y)$; $w=w-\alpha\,dw$; $b=b-\alpha\,db$

```python
def propagate(w, b, X, Y):
    # Inputs: weights w, scalar b, data X, labels Y.
    # Outputs: the gradients dw and db, packed into the dict grands.
    # Returns: grands and the cost J.
    # 1. Forward propagation
    z = np.dot(w.T, X) + b
    A = sigmoid(z)
    # 2. Cost function
    m = X.shape[1]  # number of samples
    J = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    # 3. Gradients
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)
    grands = {'dw': dw, 'db': db}
    return grands, J
```
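A centered finite difference is a cheap way to convince yourself that the `dw` formula above is correct. This is an optional sketch on tiny made-up data, not part of the original assignment:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    J = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)
    return {'dw': dw, 'db': db}, J

# Tiny made-up data: 3 features, 4 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Y = np.array([[0, 1, 1, 0]])
w = rng.normal(size=(3, 1)) * 0.01
b = 0.0

grads, _ = propagate(w, b, X, Y)

# Compare dw[0] with a centered finite difference on w[0].
eps = 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[0, 0] += eps
w_minus[0, 0] -= eps
_, J_plus = propagate(w_plus, b, X, Y)
_, J_minus = propagate(w_minus, b, X, Y)
numeric = (J_plus - J_minus) / (2 * eps)
print(abs(numeric - grads['dw'][0, 0]))  # tiny: the analytic gradient matches
```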

Optimization

```python
def optimize(w, b, X, Y, alpha, n_iters):
    # Inputs: weights w, scalar b, data X, labels Y, learning rate alpha,
    #         number of iterations n_iters.
    # Outputs: the final gradients in the dict grands; the updated w and b in
    #          the dict params; the cost recorded every 100 iterations in costs.
    # Returns: grands, params, costs.
    costs = []  # saved so the cost curve can be plotted later
    for i in range(n_iters):
        grands, J = propagate(w, b, X, Y)
        dw = grands['dw']
        db = grands['db']
        w = w - alpha * dw
        b = b - alpha * db
        if i % 100 == 0:
            costs.append(J)
            print('n_iters is ', i, ', cost is ', J)
    grands = {'dw': dw, 'db': db}
    params = {'w': w, 'b': b}
    return grands, params, costs
```

Prediction

```python
def predict(w, b, X_test):
    # Inputs: weights w, scalar b, test data X_test.
    # Output: the prediction matrix y_pred (each entry is either 1 or 0).
    z = np.dot(w.T, X_test) + b
    A = sigmoid(z)
    m = X_test.shape[1]
    y_pred = np.zeros((1, m))  # initialize y_pred with shape 1 x m
    for i in range(m):
        if A[:, i] > 0.5:
            y_pred[:, i] = 1
        else:
            y_pred[:, i] = 0
    return y_pred
```
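The per-sample loop can also be replaced by a single vectorized comparison, since NumPy broadcasts the 0.5 threshold across all m columns at once. A sketch (`predict_vectorized` is an illustrative name, not from the original assignment):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Same behavior as the loop-based predict, but vectorized.
def predict_vectorized(w, b, X):
    A = sigmoid(np.dot(w.T, X) + b)
    return (A > 0.5).astype(float)

# Quick check on made-up inputs: 2 features, 3 samples.
w = np.array([[1.0], [-1.0]])
b = 0.0
X = np.array([[3.0, 0.0, -3.0],
              [0.0, 0.0, 0.0]])
print(predict_vectorized(w, b, X))  # [[1. 0. 0.]]
```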

Assembling the model: combining the optimization and prediction modules

```python
def model(w, b, X_train, Y_train, X_test, Y_test, alpha, n_iters):
    # Inputs: weights w, scalar b, training data X_train, training labels Y_train,
    #         test data X_test, test labels Y_test, learning rate alpha,
    #         number of iterations n_iters.
    # Output: the results dict rs.
    grands, params, costs = optimize(w, b, X_train, Y_train, alpha, n_iters)
    w = params['w']
    b = params['b']
    y_pred_train = predict(w, b, X_train)
    y_pred_test = predict(w, b, X_test)
    print('the train acc is ', np.mean(y_pred_train == Y_train) * 100, '%')
    print('the test acc is ', np.mean(y_pred_test == Y_test) * 100, '%')
    rs = {'w': w, 'b': b, 'costs': costs,
          'y_pred_train': y_pred_train, 'y_pred_test': y_pred_test,
          'alpha': alpha}  # return everything in a dict, similar to JSON
    return rs
```
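Before running the full cat dataset, the whole pipeline can be smoke-tested on tiny made-up, linearly separable data. A condensed sketch of the same functions (prints and cost tracking stripped out), with hypothetical toy inputs:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    J = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    return {'dw': 1 / m * np.dot(X, (A - Y).T),
            'db': 1 / m * np.sum(A - Y)}, J

def optimize(w, b, X, Y, alpha, n_iters):
    for _ in range(n_iters):
        grads, J = propagate(w, b, X, Y)
        w = w - alpha * grads['dw']
        b = b - alpha * grads['db']
    return w, b

def predict(w, b, X):
    return (sigmoid(np.dot(w.T, X) + b) > 0.5).astype(float)

# Made-up separable data: label is 1 exactly when the single feature is positive.
X = np.array([[-2.0, -1.0, 1.0, 2.0]])
Y = np.array([[0, 0, 1, 1]])
w, b = np.zeros((1, 1)), 0.0
w, b = optimize(w, b, X, Y, alpha=0.5, n_iters=500)
acc = np.mean(predict(w, b, X) == Y) * 100
print('train acc:', acc, '%')  # 100.0 %
```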

Complete code

```python
import numpy as np
import matplotlib.pyplot as plt
import h5py
from skimage import transform  # used to resize your own image

# Load the training and test sets
train_dataset = h5py.File('D:\\0112zhaohuan\\software\\pycharm\\code\\hello\\train_catvnoncat.h5', 'r')
test_dataset = h5py.File('D:\\0112zhaohuan\\software\\pycharm\\code\\hello\\test_catvnoncat.h5', 'r')

# Inspect the keys of the training and test sets
for key in train_dataset:
    print(key)
# list_classes
# train_set_x
# train_set_y
for key in test_dataset:
    print(key)
# list_classes
# test_set_x
# test_set_y

# Extract the training data, training labels, test data, and test labels
train_data_org = train_dataset['train_set_x'][:]
train_labels_org = train_dataset['train_set_y'][:]
test_data_org = test_dataset['test_set_x'][:]
test_labels_org = test_dataset['test_set_y'][:]
print(train_data_org.shape, train_labels_org.shape, test_data_org.shape, test_labels_org.shape)

# View an image
plt.imshow(train_data_org[150])

# Reshape the data
m_train = train_data_org.shape[0]  # 209
m_test = test_data_org.shape[0]    # 50
print(m_train, m_test)
train_data_tran = train_data_org.reshape(m_train, -1).T
test_data_tran = test_data_org.reshape(m_test, -1).T
train_labels_tran = train_labels_org[np.newaxis, :]
test_labels_tran = test_labels_org[np.newaxis, :]
print(train_data_tran.shape, test_data_tran.shape, train_labels_tran.shape, test_labels_tran.shape)

# Standardize the data
train_data_sta = train_data_tran / 255
test_data_sta = test_data_tran / 255

# Define the sigmoid function
def sigmoid(z):
    a = 1 / (1 + np.exp(-z))
    return a

# Initialize the parameters w and b
n_dim = train_data_sta.shape[0]  # 12288
w = np.zeros((n_dim, 1))
b = 0

# Define forward propagation, the cost function, and gradient descent
def propagate(w, b, X, Y):
    # Inputs: weights w, scalar b, data X, labels Y.
    # Returns: the gradients dict grands and the cost J.
    Z = np.dot(w.T, X) + b
    A = sigmoid(Z)
    m = X.shape[1]
    J = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)
    grands = {'dw': dw, 'db': db}
    return grands, J

# Optimization
def optimize(w, b, X, Y, alpha, n_iters):
    # alpha is the learning rate; n_iters is the number of iterations.
    # The cost is recorded every 100 iterations in costs.
    costs = []
    for i in range(n_iters):
        grands, J = propagate(w, b, X, Y)
        dw = grands['dw']
        db = grands['db']
        w = w - alpha * dw
        b = b - alpha * db
        if i % 100 == 0:
            costs.append(J)
            print('n_iters is ', i, 'cost is', J)
    grands = {'dw': dw, 'db': db}
    paras = {'w': w, 'b': b}
    return grands, paras, costs

# Prediction
def predict(w, b, X):
    # Output: the prediction matrix y_pred (each entry is either 1 or 0).
    Z = np.dot(w.T, X) + b
    A = sigmoid(Z)
    m = X.shape[1]
    y_pred = np.zeros((1, m))
    for i in range(m):
        if A[:, i] > 0.5:
            y_pred[:, i] = 1
        else:
            y_pred[:, i] = 0
    return y_pred

# Assemble the model
def model(w, b, X_train, Y_train, X_test, Y_test, alpha, n_iters):
    # Output: the results dict rs.
    grands, paras, costs = optimize(w, b, X_train, Y_train, alpha, n_iters)
    w = paras['w']
    b = paras['b']
    y_pred_train = predict(w, b, X_train)
    y_pred_test = predict(w, b, X_test)
    print('the train acc is ', np.mean(y_pred_train == Y_train) * 100, '%')
    print('the test acc is ', np.mean(y_pred_test == Y_test) * 100, '%')
    rs = {'w': w, 'b': b, 'costs': costs,
          'y_pred_train': y_pred_train, 'y_pred_test': y_pred_test,
          'alpha': alpha}
    return rs

rs = model(w, b, train_data_sta, train_labels_tran, test_data_sta, test_labels_tran,
           alpha=0.005, n_iters=2000)
plt.plot(rs['costs'])
plt.xlabel('per hundred iters')
plt.ylabel('cost')

# Sanity-check one training sample
index = 2
print('y is ', train_labels_tran[0, index])
print('y_pred is ', rs['y_pred_train'][0, index])
plt.imshow(train_data_org[index])

# Try an image from your own computer
fname = 'D:\\0112zhaohuan\\tim.jpg'
image = plt.imread(fname)
plt.imshow(image)
print(image.shape)
image_tran = transform.resize(image, (64, 64, 3)).reshape(64 * 64 * 3, 1)
print(image_tran.shape)
y = predict(rs['w'], rs['b'], image_tran)
print(int(y))
```
