
Briefly/rldemo_paper_code

This repository has not declared an open-source license file (LICENSE); before using the code, check the project description and its upstream dependencies.
Copy_of_createPPONetworks.m 1.05 KB
Briefly committed on 2023-09-10 17:56: add all files
%% Critic (value-function) network for the PPO agent.
% observationInfo, actionInfo, numObservations, and numActions are assumed to
% be defined by the calling script (typically obtained from the environment).
criticNetwork = [
    featureInputLayer(numObservations,'Normalization','none','Name','observations')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(32,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(16,'Name','fc3')
    fullyConnectedLayer(1,'Name','fc4')];    % scalar state-value output

criticOptions = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
critic = rlValueRepresentation(criticNetwork,observationInfo,...
    'Observation',{'observations'},criticOptions);

%% Actor (policy) network: outputs a probability for each discrete action.
actorNetwork = [
    featureInputLayer(numObservations,'Normalization','none','Name','observations')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(32,'Name','fc2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(numActions,'Name','out')
    softmaxLayer('Name','actionProb')];      % action probabilities

actorOptions = rlRepresentationOptions('LearnRate',2e-4,'GradientThreshold',1);
actor = rlStochasticActorRepresentation(actorNetwork,observationInfo,actionInfo,...
    'Observation',{'observations'},actorOptions);
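
The script above assumes that observationInfo, actionInfo, numObservations, and numActions already exist in the workspace, and it stops after building the two representations. Below is a minimal sketch, using the same (pre-R2022a) Reinforcement Learning Toolbox representation API, of how those variables might be defined and how the actor and critic could be assembled into a PPO agent. The environment specs and agent option values here are illustrative assumptions, not part of the repository.

% Minimal sketch (assumptions): define example environment specs, derive the
% sizes used above, and assemble a PPO agent from the actor and critic.
observationInfo = rlNumericSpec([4 1]);       % hypothetical 4-element observation
actionInfo      = rlFiniteSetSpec([1 2 3]);   % hypothetical discrete action set
numObservations = observationInfo.Dimension(1);
numActions      = numel(actionInfo.Elements);

% ... build criticNetwork/critic and actorNetwork/actor as above ...

agentOptions = rlPPOAgentOptions( ...
    'ExperienceHorizon',256, ...
    'ClipFactor',0.2, ...
    'EntropyLossWeight',0.01, ...
    'MiniBatchSize',64, ...
    'DiscountFactor',0.99);
agent = rlPPOAgent(actor,critic,agentOptions);

The resulting agent can then be passed to train with an environment whose specs match observationInfo and actionInfo.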
Repository: https://gitee.com/briefly/rldemo_paper_code.git
