saving plot: 2020-01-28-13-38-32_Sim-Stack-Trial-Reward-Training-Sim-Stack-Trial-Reward-Training_success_plot.png
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --tcp_port 19980 --place --future_reward_discount 0.65 --nn densenet
STACKING COMMON SENSE DENSENET, trial reward, check_z_height
-------------------------------------------------------------
costar@costar-desktop|~/src/real_good_robot on fast_sim_thread!?
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65
GPU 0, Tab 0, port 19990, commit 2353c4a9ca39438eca18855b8da68d64a7258706
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-01-30-13-41-13_Sim-Stack-Trial-Reward-Common-Sense-Training
STACKING, DENSENET, NO COMMON SENSE, trial reward, check_z_height
-------------------------------------------------------------
GPU 1, Tab 1, port 19998, commit 2353c4a9ca39438eca18855b8da68d64a7258706
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-01-30-13-42-19_Sim-Stack-Trial-Reward-Training
STACKING COMMON SENSE DENSENET, trial reward, check_z_height
-------------------------------------------------------------
costar@costar-desktop|~/src/real_good_robot on fast_sim_thread!?
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65
GPU 0, Tab 0, port 19990, commit 0e0a4749fc5560d64e3129d1f269fc5fc7e0dc32
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-01-30-13-41-13_Sim-Stack-Trial-Reward-Common-Sense-Training
Experience replay update: training now alternates between past successes and failures for the current action
=====================================================================================================
Note: alternating slowed down learning and has since been removed
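A minimal sketch of that alternating replay scheme, for reference (the buffer names and the replay_step counter are hypothetical, not the actual trainer.py interface):

    import random

    def sample_replay(success_buffer, failure_buffer, replay_step):
        """Alternate between a past success and a past failure of the current action type."""
        prefer_success = (replay_step % 2 == 0)
        pool = success_buffer if prefer_success else failure_buffer
        if not pool:  # fall back to the other pool while one is still empty
            pool = failure_buffer if prefer_success else success_buffer
        return random.choice(pool) if pool else None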
************************************************************
STACKING, DENSENET, NO COMMON SENSE, trial reward, check_z_height -- ABSOLUTE BEST RUN AS OF 2020-02-02
-------------------------------------------------------------
GPU 1, Tab 1, port 19998, commit b3661e21bf715f93f23833583e6ee5e9ffb607aa
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-01-18-56-14_Sim-Stack-Trial-Reward-Training
Stats for training:
Max grasp success rate: 0.9424603174603174, at action iteration: 13453. (total of 15651 actions, max excludes first 1000 actions)
Max place success rate: 0.79957805907173, at action iteration: 14487. (total of 15652 actions, max excludes first 1000 actions)
Max action efficiency: 0.582, at action iteration: 12444. (total of 15652 actions, max excludes first 1000 actions)
Max trial success rate: 0.7904761904761904, at action iteration: 12989. (total of 15651 actions, max excludes first 1000 actions)
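For reference, the "Max ... excludes first N actions" statistics above are a running success rate maximized over action iterations after a warm-up window; a rough sketch of that computation (hypothetical helper, not the repo's plotting code; the 200-action window is an assumption):

    import numpy as np

    def max_success_rate(successes, window=200, exclude_first=1000):
        """Max of the sliding-window success rate, skipping the first exclude_first actions."""
        successes = np.asarray(successes, dtype=float)
        best_rate, best_iteration = -np.inf, None
        for i in range(exclude_first, len(successes)):
            rate = successes[max(0, i - window + 1):i + 1].mean()
            if rate > best_rate:
                best_rate, best_iteration = rate, i
        return best_rate, best_iteration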
************************************************************
STACKING COMMON SENSE DENSENET, trial reward, check_z_height -- This did very well, but not quite as well as the run above
-------------------------------------------------------------
GPU 0, Tab 0, port 19990, commit b3661e21bf715f93f23833583e6ee5e9ffb607aa
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-01-18-38-22_Sim-Stack-Trial-Reward-Common-Sense-Training
STACKING COMMON SENSE DENSENET, trial reward, check_z_height -- COMMON SENSE TRAINS ARGMAX VALUE 0 WHEN IT IS A GEOMETRICALLY KNOWN FAILURE
-------------------------------------------------------------
GPU 0, Tab 0, port 19990, commit bfeaf0326812af89093a72c97e2e43506cb9ef4c, "main.py trainer.py utils_torch.py COMMON SENSE TRAIN GEOMETRIC ARGMAX FAILURE"
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-03-11-49-57_Sim-Stack-Trial-Reward-Common-Sense-Training
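A rough sketch of the behavior named in that commit message: when the unmasked argmax falls on a pixel the common-sense mask marks as a geometrically known failure, that pixel is trained toward a value of 0 and the action is taken at the best valid pixel instead (function and variable names are illustrative, not the actual trainer.py API):

    import numpy as np

    def argmax_with_common_sense(q_values, valid_mask):
        """q_values: HxW action scores; valid_mask: 1 where the action is geometrically possible."""
        best = np.unravel_index(np.argmax(q_values), q_values.shape)
        if valid_mask[best] == 0:
            # Geometrically known failure: free training example with target value 0.
            train_pixel, train_target = best, 0.0
            masked = np.where(valid_mask > 0, q_values, -np.inf)
            act_pixel = np.unravel_index(np.argmax(masked), masked.shape)
        else:
            train_pixel, train_target = None, None
            act_pixel = best
        return act_pixel, train_pixel, train_target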
SPOT REWARD UPDATE: SPOT reward now gives full double credit to the final time step of a successful trial, with no explore_rate_decay
=================================================================================================================================
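A minimal sketch of that reward change, assuming a per-step reward list for one trial (the 2x final-step credit comes from the commit message below and gamma corresponds to --future_reward_discount; everything else is illustrative):

    def spot_trial_returns(step_rewards, trial_successful, gamma=0.65):
        """Double the reward on the final time step of a successful trial,
        then back up the discounted trial return through the earlier steps."""
        rewards = list(step_rewards)
        if trial_successful and rewards:
            rewards[-1] *= 2.0  # full double credit on the last time step
        returns, running = [], 0.0
        for r in reversed(rewards):
            running = r + gamma * running
            returns.append(running)
        return list(reversed(returns))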
STACKING, DENSENET, NO COMMON SENSE, trial reward, check_z_height, IMPORTANT GOES IN PAPER
-------------------------------------------------------------
GPU 1, Tab 1, port 19998, commit 786a5fc256a8c9eb1b1edf39f6a4f3ce274dd455, "trainer.py MAJOR SPOT REWARD CHANGE, 2X FINAL REWARD ON LAST TIMESTEP"
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-03-16-58-06_Sim-Stack-Trial-Reward-Training
> TESTING RUN Random Arrangements
> Commit: e6583b8e7ed093887b8f08261683a2220c374bdd
> export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-03-16-58-06_Sim-Stack-Trial-Reward-Training/models/snapshot.reinforcement-best-stack-rate.pth'
> Pre-trained model snapshot loaded from: /home/costar/src/real_good_robot/logs/2020-02-03-16-58-06_Sim-Stack-Trial-Reward-Training/models/snapshot.reinforcement-best-stack-rate.pth
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-20-18-00-10_Sim-Stack-SPOT-Trial-Reward-Testing
> Video: recording_2020_02_20-17_59-55.avi
> Results: {'trial_success_rate_best_value': 0.97, 'trial_success_rate_best_index': 1567, 'grasp_success_rate_best_value': 0.8146002317497103, 'grasp_success_rate_best_index': 1567, 'place_success_rate_best_value': 0.7886524822695036, 'place_success_rate_best_index': 1567, 'action_efficiency_best_value': 0.3752393107849394, 'action_efficiency_best_index': 1569}
STACKING COMMON SENSE DENSENET, trial reward, check_z_height -- COMMON SENSE TRAINS ARGMAX VALUE 0 WHEN IT IS A GEOMETRICALLY KNOWN FAILURE + double credit
-------------------------------------------------------------
GPU 0, Tab 0, port 19990, commit 786a5fc256a8c9eb1b1edf39f6a4f3ce274dd455, "main.py trainer.py utils_torch.py COMMON SENSE TRAIN GEOMETRIC ARGMAX FAILURE" + "trainer.py MAJOR SPOT REWARD CHANGE, 2X FINAL REWARD ON LAST TIMESTEP"
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training
> TESTING RUN
> export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --tcp_port 19990 --place --future_reward_discount 0.65 --nn densenet --check_z_height --is_testing --random_seed 1238 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training/models/snapshot-backup.reinforcement-best-stack-rate.pth'
> Commit: bfea389e37b7205dc54bf5dd2357eb658a0c3527
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-16-22-38-38_Sim-Stack-SPOT-Trial-Reward-Testing
> VIDEO: recording_2020_02_16-22_40-23.avi
> GPU 0, Tab 0, port 19990
PUSHING AND GRASPING WITH ALL FEATURES & STUCK OBJECT FIXES - Feb 7
--------------------------------------
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --common_sense --trial_reward --save_visualizations --future_reward_discount 0.65 --tcp_port 19998
Commit: 5c78490ae6f25cc257ac5fa2030118bc0644e9e8
logging session: logs/2020-02-07-14-43-44_Sim-Push-and-Grasp-Trial-Reward-Common-Sense-Training
GPU 0, Tab 0, port 19998, right v-rep window
> Preset testing run
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-11-15-59-07_Sim-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing
> export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir 'objects/toys' --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --future_reward_discount 0.65 --tcp_port 19990 --is_testing --random_seed 1238 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-07-14-43-44_Sim-Push-and-Grasp-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement.pth' --max_test_trials 10 --test_preset_cases
> Commit: 7b6c54ad615d592d86e71d90ea36c6478193a456
> GPU 1, tab 13, port 19999, left v-rep window
STACKING COMMON SENSE DENSENET WITH RANDOM PLACEMENTS OF OBJECTS STUCK TO GRIPPER DUE TO SIMULATOR BUGS
----------------------------------------
± export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65
commit: 5c78490ae6f25cc257ac5fa2030118bc0644e9e8
Comment: we manually uncommented the PixelNet() call in trainer.py and commented out reinforcement_net()
RESUME with no two step backprop: ± export CUDA_VISIBLE_DEVICES="2" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --place --tcp_port 20000 --future_reward_discount 0.65 --max_train_actions 10000 --nn efficientnet --disable_two_step_backprop --random_actions --resume '/home/ahundt/src/real_good_robot/logs/2020-05-04-12-08-15_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training'
RESUME commit: 3ba0b91c5accac6387345c62d5a4e8b7ff9769cd
logging session: logs/2020-02-07-14-35-59_Sim-Stack-Trial-Reward-Common-Sense
GPU 1, Tab 1, port 19990, left v-rep window
SIM STACKING, ANY BLOCK, COMMON SENSE, DENSENET, SPOT TRIAL REWARD
---------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/toys --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --tcp_port 19998 --place --future_reward_discount 0.65 --nn densenet --common_sense --check_z_height
commit: 7b6c54ad615d592d86e71d90ea36c6478193a456
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-10-19-09-09_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
GPU 0, Tab 9, port 19998, right v-rep window
> Random Testing Any block
> export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/toys --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --tcp_port 19990 --place --future_reward_discount 0.65 --nn densenet --common_sense --check_z_height --random_seed 1238 --is_testing --max_test_trials 50 --snapshot_file '/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-10-19-09-09_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement-best-stack-rate.pth'
> Commit: e6583b8e7ed093887b8f08261683a2220c374bdd
> Video: recording_2020_02_20-19_14-02.avi
> Pre-trained model snapshot loaded from: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-10-19-09-09_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement-best-stack-rate.pth
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-20-19-14-08_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Testing
> Testing Results: {'trial_success_rate_best_value': 0.24, 'trial_success_rate_best_index': 2680, 'grasp_success_rate_best_value': 0.554670528602462, 'grasp_success_rate_best_index': 2680, 'place_success_rate_best_value': 0.4846153846153846, 'place_success_rate_best_index': 2680, 'action_efficiency_best_value': 0.03805970149253731, 'action_efficiency_best_index': 2680}
> GPU 0, Tab 8, port 19990, left v-rep window,
PUSHING AND GRASPING WITH ALL FEATURES & SAVE ALL MODELS ACCORDING TO BEST STATS - Feb 12
--------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --common_sense --trial_reward --save_visualizations --future_reward_discount 0.65 --tcp_port 19990
Commit: 22f63b9eea28bbeaaf31930e9731cc7b17b43c35
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-12-17-58-04_Sim-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
GPU 1, Tab 14, port 19990, left v-rep window
REAL ROBOT PUSHING AND GRASPING COMMON SENSE SPOT - NICK aborted FEB 13
---------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65
commit: 5d59f747024e92918d3e8403ee816e9f86d5352b
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-13-15-16-38_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
GPU 1: Tab 0, port N/A, Real Robot
^^^ NOTE: not sure what went wrong; things stopped showing up in the camera. Will come back later to check.
The visualizations folder is still there.
PUSHING AND GRASPING WITH ALL FEATURES & SAVE ALL MODELS ACCORDING TO BEST STATS - Feb 13
--------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --common_sense --trial_reward --save_visualizations --future_reward_discount 0.65 --tcp_port 19990
Commit: 2b55d4b48c2c6fa1959e52947691b26355aa4180
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-13-18-38-34_Sim-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
GPU 0, Tab 0, port 19990, left v-rep window
SIM STACKING, COMMON SENSE, DENSENET, SPOT TRIAL REWARD - Feb 13 - Critical bugfix to place experience replay
---------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --tcp_port 19998 --place --future_reward_discount 0.65 --nn densenet --common_sense --check_z_height
commit: 2b55d4b48c2c6fa1959e52947691b26355aa4180
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-13-19-02-32_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
GPU 1, Tab 1, port 19998, right v-rep window
REAL ROBOT PUSHING AND GRASPING COMMON SENSE, SPOT - FEB 14 - LONG AND GOOD RUN
=================================================================================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65
commit: d5e28bcac0dc41d3d41e7f7d538f91bab73c69f8
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-14-13-08-21_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
Resume command: export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --resume '/home/costar/src/real_good_robot/logs/2020-02-14-13-08-21_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training'
GPU 1, Tab 0, port N/A, Real Robot
SIM STACKING, COMMON SENSE, DENSENET, SPOT TRIAL REWARD - Feb 14 - Critical bugfix to place experience replay, plotting
---------------------------------------------------------
commit: d5e28bcac0dc41d3d41e7f7d538f91bab73c69f8
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --tcp_port 19998 --place --future_reward_discount 0.65 --nn densenet --common_sense --check_z_height
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-14-19-47-20_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
GPU 1, Tab 1, port 19998, right v-rep window
REAL STACKING, COMMON SENSE, SPOT - FEB 15 - JUNK DO NOT USE
=================================================================================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --place --check_z_height
RESUME COMMAND: export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --place --check_z_height --resume '/home/costar/src/real_good_robot/logs/2020-02-15-15-55-40_Real-Stack-SPOT-Trial-Reward-Common-Sense-Training'
commit: ea6b6d90967aaadc0d3ef8620f1d3a590cff0757
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-15-14-12-00_Real-Stack-SPOT-Trial-Reward-Common-Sense-Training
GPU 0, Tab 0, port N/A, Real Robot
REAL PUSHING AND GRASPING - SUPER BASIC RUN - FEB 18 - FOR FINAL PAPER RESULTS!!!!!! IMPORTANT - planning on 1000 actions.
====================================================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --save_visualizations
Commit: 656625133ed3c7d750f99c22b44c82e288c7e6be
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-18-18-58-15_Real-Push-and-Grasp-Two-Step-Reward-Training
GPU 0, Tab 0, port N/A, Real Robot
REAL, PUSHING AND GRASPING, COMMON SENSE, SPOT - FEB 19 - LONG AND GOOD RUN
=================================================================================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65
Commit: 67b792c6a08309c8406de30804d1fe147c9d967f
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-19-15-33-05_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
> OUTDATED
> Commit: bb5ae93d373a3bbc40786cc23f76eed0ae2ad233
> OUTDATED DUE TO PAUSE BEFORE FIRST TRIAL IS OVER BUG: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-19-14-28-23_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
> OUTDATED DUE TO PAUSE BEFORE FIRST TRIAL IS OVER BUG: Resume command: export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-19-14-28-23_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
GPU 0, Tab 0, port N/A, Real Robot
REAL, STACKING, COMMON SENSE, SPOT - FEB 09 (multi day) - FOR FINAL PAPER RESULTS!!!
====================================================================================
https://github.com/jhu-lcsr/real_good_robot/releases/tag/v0.14.0
Commit: 8e01a12758f25ab3e4535b861bdbb140d8415ce9
> Final Testing Run, 10 trials
> export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --place --future_reward_discount 0.65 --is_testing --max_test_trials 10 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-09-11-02-57_Real-Stack-SPOT-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement_trial_success_rate_best_value.pth' --random_seed 1238
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-19-23-21-59_Real-Stack-SPOT-Trial-Reward-Common-Sense-Testing
SIM ROWS DENSENET - OLD ALGORITHM - WITH NO COMMON SENSE, NO TRIAL REWARD
-------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --tcp_port 19998 --place --check_row --max_train_actions 10000
Commit: 8b6937f3597815e3cf0c62294d2235ea14c26aec
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-21-20-33-17_Sim-Rows-Two-Step-Reward-Training
GPU 1, Tab 1, port 19998, right v-rep window
SIM STACK DENSENET - OLD ALGORITHM - WITH NO COMMON SENSE, NO TRIAL REWARD (TODO RESUME ME!!!!!)
-------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --tcp_port 19998 --place --check_z_height --max_train_actions 10000
RESUME: export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --tcp_port 19998 --place --check_z_height --max_train_actions 10000 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-21-20-33-47_Sim-Stack-Two-Step-Reward-Training
Commit: 8b6937f3597815e3cf0c62294d2235ea14c26aec
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-21-20-33-47_Sim-Stack-Two-Step-Reward-Training
GPU 0, Tab 0, port 19990, left v-rep window
SIM TO REAL STACKING TRAINING COMMAND, Common Sense, SPOT Trial Reward
====================================================================
/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-22-Sim-to-Real-2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --place --future_reward_discount 0.65 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-22-Sim-to-Real-2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --load_snapshot '/home/costar/src/real_good_robot/logs/2020-02-22-Sim-to-Real-2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement.pth'
Commit: 42f0fc09a2ed776c7089ea346d14509957dd0f5c
GPU 0, Tab 0, port N/A, Real Robot
SIM TO REAL TESTING STACKING - 9 of 10 stack successes.
===================
costar@costar-desktop|/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot on revert_pixelnet!
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-22-Sim-to-Real-2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement-best-stack-rate.pth'
Pre-trained model snapshot loaded from: /home/costar/src/real_good_robot/logs/2020-02-22-Sim-to-Real-2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement-best-stack-rate.pth
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-22-17-52-17_Real-Stack-SPOT-Trial-Reward-Common-Sense-Testing
2020-02-22-17-52-17_Real-Stack-SPOT-Trial-Reward-Common-Sense-Testing
{"action_efficiency_best_index": 183, "action_efficiency_best_value": 0.29508196721311475, "grasp_success_rate_best_index": 183, "grasp_success_rate_best_value": 0.4263565891472868, "place_success_rate_best_index": 183, "place_success_rate_best_value": 0.7818181818181819, "trial_success_rate_best_index": null, "trial_success_rate_best_value": -Infinity}
TEST-V2: we applied WD-40 to the gripper, but LR was still too low
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-22-Sim-to-Real-2020-02-03-16-57-28_Sim-Stack-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement-best-stack-rate.pth'
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-22-19-54-28_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing
TEST-V3
{'trial_success_rate_best_value': 0.9, 'trial_success_rate_best_index': 101, 'grasp_success_rate_best_value': 0.8035714285714286, 'grasp_success_rate_best_index': 101, 'place_success_rate_best_value': 0.8043478260869565, 'place_success_rate_best_index': 101, 'action_efficiency_best_value': 0.594059405940594, 'action_efficiency_best_index': 103}
SIM TO REAL TESTING Pushing and Grasping
========================================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 1 --snapshot_file '/home/costar/Downloads/snapshot.reinforcement_grasp_action_efficiency_best_value.pth'
from femur: 2020-02-16-21-33-59_Sim-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-22-19-54-28_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing
Grasp Count: 52, grasp success rate: 0.34615384615384615
V2 testing 2020-02-24-0001
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 1 --snapshot_file '/home/costar/Downloads/snapshot.reinforcement_grasp_action_efficiency_best_value.pth'
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-24-01-03-39_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing
Max grasp success rate: 0.21621621621621623, at action iteration: 104. (total of 106 actions, max excludes first 104 actions)
Max grasp action efficiency: 0.15384615384615385, at action iteration: 104. (total of 107 actions, max excludes first 104 actions)
saving plot: 2020-02-24-01-16-21_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing-Sim-to-Real-Pushing-And-Grasping-SPOT-Q_success_plot.png
saving best stats to: /home/costar/src/real_good_robot/logs/2020-02-24-01-16-21_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing/data/best_stats.json
saving best stats to: /home/costar/src/real_good_robot/logs/2020-02-24-01-16-21_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing/best_stats.json
{"grasp_action_efficiency_best_index": 104, "grasp_action_efficiency_best_value": 0.15384615384615385, "grasp_success_rate_best_index": 104, "grasp_success_rate_best_value": 0.21621621621621623}
(This first run ended early; not sure what happened. Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-24-01-03-39_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing)
SIM TO REAL TESTING ROWS
========================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --check_row --num_obj 4 --snapshot_file '/home/costar/Downloads/2020-02-10-18-38-rows-snapshot.reinforcement-best-stack-rate.pth'
REAL, PUSHING AND GRASPING, COMMON SENSE, SPOT - FEB 23 - LONG AND GOOD RUN - in paper
=================================================================================
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65
'/home/costar/src/real_good_robot/logs/2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training'
RESUME: export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --resume '/home/costar/src/real_good_robot/logs/2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training'
Max trial success rate: 1.0, at action iteration: 682. (total of 1032 actions, max excludes first 500 actions)
Max grasp success rate: 0.6054421768707483, at action iteration: 774. (total of 1032 actions, max excludes first 500 actions)
Max grasp action efficiency: 0.534, at action iteration: 774. (total of 1033 actions, max excludes first 500 actions)
saving plot: 2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training-Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training_success_plot.png
saving best stats to: /home/costar/src/real_good_robot/logs/2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training/data/best_stats.json
saving best stats to: /home/costar/src/real_good_robot/logs/2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training/best_stats.json
Training Complete! Dir: /home/costar/src/real_good_robot/logs/2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training
Training results:
{'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 682, 'grasp_success_rate_best_value': 0.6054421768707483, 'grasp_success_rate_best_index': 774, 'grasp_action_efficiency_best_value': 0.534, 'grasp_action_efficiency_best_index': 774}
TESTING push and grasp
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 1 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-02-23-11-43-55_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement_grasp_success_rate_best_value.pth'
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-23-18-51-58_Real-Push-and-Grasp-SPOT-Trial-Reward-Common-Sense-Testing
=============================================================
2020-04 and 2020-05
=============================================================
Tab 7: ~/src/CoppeliaSim_Edu_V4_0_0_Ubuntu18_04/coppeliaSim.sh -gREMOTEAPISERVERSERVICE_19990_FALSE_TRUE -s ~/src/real_good_robot/simulation/simulation.ttt
Tab 8: ~/src/CoppeliaSim_Edu_V4_0_0_Ubuntu18_04/coppeliaSim.sh -gREMOTEAPISERVERSERVICE_19998_FALSE_TRUE -s ~/src/real_good_robot/simulation/simulation.ttt
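The -gREMOTEAPISERVERSERVICE_<port>_FALSE_TRUE flag starts the legacy remote API server on that port, which has to match the --tcp_port passed to main.py. A quick connectivity check using the legacy remote API Python bindings (module name assumed to be `sim`, as shipped with CoppeliaSim; older V-REP releases call it `vrep`):

    import sim  # legacy remote API bindings shipped with CoppeliaSim

    # Connects to the simulator launched with -gREMOTEAPISERVERSERVICE_19990_FALSE_TRUE,
    # i.e. the instance main.py reaches via --tcp_port 19990.
    client_id = sim.simxStart('127.0.0.1', 19990, True, True, 5000, 5)
    if client_id == -1:
        raise RuntimeError('Could not connect to CoppeliaSim on port 19990')
    print('Connected, client id:', client_id)
    sim.simxFinish(client_id)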
SIM STACK DENSENET - OLD ALGORITHM - WITH NO COMMON SENSE, NO TRIAL REWARD - NO HEIGHT REWARD - 2020-04-25-19-59-01
----------------------------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --tcp_port 19990 --place --check_z_height --max_train_actions 10000 --no_height_reward --disable_situation_removal
RESUME:
Commit: de5f639ae814bcb1870abe3d8190bebf84abe1ec
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-04-25-21-41-04_Sim-Stack-Two-Step-Reward-Training
IGNORE, FORGOT TO DISABLE SITUATION REMOVAL: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-04-25-19-59-01_Sim-Stack-Two-Step-Reward-Training
± cat logs/2020-04-25-21-41-04_Sim-Stack-Two-Step-Reward-Training/2020-04-27-17-44-27_Sim-Stack-Two-Step-Reward-Testing/best_stats.json
{"action_efficiency_best_index": 3991, "action_efficiency_best_value": 0.019543973941368076, "grasp_success_rate_best_index": 3991, "grasp_success_rate_best_value": 0.9404958677685951, "place_success_rate_best_index": 3991, "place_success_rate_best_value": 0.5837526959022286, "trial_success_rate_best_index": 3991, "trial_success_rate_best_value": 0.13}%
GPU 0, Tab 4, port 19990, left v-rep window
SIM ROW DENSENET - OLD ALGORITHM - WITH NO COMMON SENSE, NO TRIAL REWARD - NO HEIGHT REWARD - 2020-04-25-20-00-41
--------------------------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --tcp_port 19998 --place --check_row --max_train_actions 10000 --no_height_reward --disable_situation_removal
Commit: de5f639ae814bcb1870abe3d8190bebf84abe1ec
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-04-25-21-41-35_Sim-Rows-Two-Step-Reward-Training
IGNORE, FORGOT TO DISABLE SITUATION REMOVAL: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-04-25-20-00-41_Sim-Rows-Two-Step-Reward-Training
± cat logs/2020-04-25-21-41-35_Sim-Rows-Two-Step-Reward-Training/2020-04-27-17-25-31_Sim-Rows-Two-Step-Reward-Testing/best_stats.json
{"action_efficiency_best_index": 2124, "action_efficiency_best_value": 0.00847457627118644, "grasp_success_rate_best_index": 2124, "grasp_success_rate_best_value": 0.5886075949367089, "place_success_rate_best_index": 2124, "place_success_rate_best_value": 0.2507204610951009, "trial_success_rate_best_index": 2124, "trial_success_rate_best_value": 0.13}
GPU 1, Tab 5, port 19998, right v-rep window
XXXX IGNORE XXXX SIM STACK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - costar 2020-04-28-16-15-22
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 10000
RUN HAD PROBLEMS: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-04-28-16-15-22_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
Commit: cf8fdeb86eed278fe9cb9b863662e2eaa327ebea
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
XXXX IGNORE XXXX SIM ROW - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - costar 2020-04-28-16-16-15
--------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 10000
RUN HAD PROBLEMS: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-04-28-16-16-15_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training
Commit: cf8fdeb86eed278fe9cb9b863662e2eaa327ebea
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
SIM STACK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - costar 2020-05-01-21-47-56
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 10000
RESUME 20k: ± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-01-21-47-56_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-01-21-47-56_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
RESUME: ± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 10000 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-01-21-47-56_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
Commit (crash): dae67d0f89fba1917e4fb89fc82f8f6171330f1f
Commit (resume): 2f9f569f0c9bfd00df480a9dbce1dba8d43b5020
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
SIM ROW - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - costar 2020-05-01-21-48-39
--------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 10000
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-01-21-48-39_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training
RESUME 20k: ± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-01-21-48-39_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training
RESUME: ± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 10000 --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-01-21-48-39_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training
RESUME2: ± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19999 --place --future_reward_discount 0.65 --max_train_actions 10000 --random_actions --resume '/home/ahundt/src/real_good_robot/logs/2020-05-03-20-04-47_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training'
Commit (crash): dae67d0f89fba1917e4fb89fc82f8f6171330f1f
Commit (resume): 2f9f569f0c9bfd00df480a9dbce1dba8d43b5020
Commit (resume2): 3ba0b91c5accac6387345c62d5a4e8b7ff9769cd
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
SIM STACK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - NO TWO STEP BACKPROP - SORT TRIAL REWARD - costar
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 10000 --disable_two_step_backprop
2020-05-05-14-26-12_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
Commit: 3ba0b91c5accac6387345c62d5a4e8b7ff9769cd
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
SIM STACK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - NO TWO STEP BACKPROP - SORT TRIAL REWARD - RANDOM ACTIONS - costar
----------------------------------------------------------------------------------------
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 10000 --disable_two_step_backprop --random_actions
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-05-16-44-34_Sim-Stack-SPOT-Trial-Reward-Commo
Commit: 3ba0b91c5accac6387345c62d5a4e8b7ff9769cd
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
SIM STACK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.25, 1, 1 - costar 2020-05-06
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
'/home/costar/src/real_good_robot/logs/2020-05-06-10-03-58_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training'
RESUME: export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume '/home/costar/src/real_good_robot/logs/2020-05-06-10-03-58_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training'
Commit: d4e776ffb89f6d916ca7ff96ebaf717bfdd45db5
Commit (resume): 7dbec777fd08d9e66b53ec72564880cebdb452e1
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
Max trial success rate: 0.89, at action iteration: 4489. (total of 4491 actions, max excludes first 4489 actions)
Max grasp success rate: 0.6816311260755705, at action iteration: 4489. (total of 4491 actions, max excludes first 4489 actions)
Max place success rate: 0.6521739130434783, at action iteration: 4489. (total of 4492 actions, max excludes first 4489 actions)
Max action efficiency: 0.12029405212742258, at action iteration: 4491. (total of 4492 actions, max excludes first 4489 actions)
saving plot: 2020-05-10-01-20-13_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Testing-Sim-Stack-SPOT-Trial-Reward-Common-Sense-Testing_success_plot.png
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-10-01-20-13_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Testing/data/best_stats.json
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-10-01-20-13_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Testing/best_stats.json
Random Testing Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-06-10-03-58_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training/2020-05-10-01-20-13_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Testing
Random Testing results:
{'trial_success_rate_best_value': 0.89, 'trial_success_rate_best_index': 4489, 'grasp_success_rate_best_value': 0.6816311260755705, 'grasp_success_rate_best_index': 4489, 'place_success_rate_best_value': 0.6521739130434783, 'place_success_rate_best_index': 4489, 'action_efficiency_best_value': 0.12029405212742258, 'action_efficiency_best_index': 4491}
Training Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-06-10-03-58_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training
Training results:
{'trial_success_rate_best_value': 0.7692307692307693, 'trial_success_rate_best_index': 8827, 'grasp_success_rate_best_value': 0.8487084870848709, 'grasp_success_rate_best_index': 7472, 'place_success_rate_best_value': 0.8325991189427313, 'place_success_rate_best_index': 11194, 'action_efficiency_best_value': 0.552, 'action_efficiency_best_index': 11183}
MANUAL TESTING RUN ON action_efficiency_best_index
> export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --random_actions --snapshot_file '/home/costar/src/real_good_robot/logs/2020-05-06-10-03-58_Sim-Stack-SPOT-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement_action_efficiency_best_index.pth' --is_testing --save_visualizations --max_test_trials 100 --random_seed 1238
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-12-17-48-13_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Commit: 13068e53c269b01b1385a3b185d38b006eca762b
> TODO(ahundt) move the testing directory into the training directory once complete
> TODO(ahundt) rerun this, the simulation became unstable because of placing out of arm workspace
>
>
> Testing iteration: 1665
> prev_height: 0.0 max_z: 0.0511079157217398 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> Current count of pixels with stuff: 5131.0 threshold below which the scene is considered empty: 10
> Change detected: True (value: 5239)
> Trainer.get_label_value(): Current reward: 4.139246 Current reward multiplier: 4.139246 Predicted Future reward: 5.071979 Expected reward: 4.139246 + 0.650000 x 5.071979 = 7.436033
> trial_complete_indices: [ 6. 29. 41. 59. 67. 85. 91. 101. 109. 135. 143. 169.
> 179. 187. 192. 208. 214. 226. 265. 280. 291. 313. 323. 329.
> 342. 351. 361. 383. 387. 420. 439. 450. 477. 485. 506. 524.
> 533. 546. 576. 624. 647. 673. 681. 691. 701. 711. 729. 741.
> 752. 777. 793. 842. 872. 893. 908. 918. 930. 936. 948. 976.
> 993. 1001. 1016. 1041. 1049. 1072. 1078. 1084. 1094. 1102. 1133. 1141.
> 1149. 1168. 1174. 1184. 1210. 1221. 1245. 1255. 1271. 1278. 1293. 1297.
> 1311. 1325. 1331. 1342. 1352. 1360. 1369. 1376. 1386. 1390. 1496. 1512.
> 1522. 1541. 1545. 1636. 1665.]
> Max trial success rate: 0.98, at action iteration: 1662. (total of 1664 actions, max excludes first 1662 actions)
> Max grasp success rate: 0.5967117988394585, at action iteration: 1662. (total of 1664 actions, max excludes first 1662 actions)
> Max place success rate: 0.7615262321144675, at action iteration: 1662. (total of 1665 actions, max excludes first 1662 actions)
> Max action efficiency: 0.36101083032490977, at action iteration: 1664. (total of 1665 actions, max excludes first 1662 actions)
> saving plot: 2020-05-12-17-48-13_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-12-17-48-13_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-12-17-48-13_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Trial logging complete: 101 --------------------------------------------------------------
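For reference, the Expected reward lines in the testing log above are the one-step bootstrapped target used throughout these runs: expected = current_reward + future_reward_discount * predicted_future_reward. Worked example from the log line above:

    current_reward = 4.139246
    future_reward_discount = 0.65
    predicted_future_reward = 5.071979
    expected_reward = current_reward + future_reward_discount * predicted_future_reward
    # -> 7.436033, matching the Trainer.get_label_value() line above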
SIM ROW - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - RANDOM ACTIONS - REWARD SCHEDULE 0.25, 1, 1 - costar 2020-05-06
----------------------------------------------------------------------------------------
± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
'/home/costar/src/real_good_robot/logs/2020-05-06-09-59-31_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training'
RESUME: ± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /home/costar/src/real_good_robot/logs/2020-05-06-09-59-31_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training
Commit: d4e776ffb89f6d916ca7ff96ebaf717bfdd45db5
Commit (resume): 7dbec777fd08d9e66b53ec72564880cebdb452e1
Commit (resume2): 67bf4b2a56a4aac72a460d5d8598d38a2daac0fd
Commit (resume3 - check for full row on every place): c6c4b401fe719aae89966adaf9ed5ca24cf95fde
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
Max trial success rate: 0.67, at action iteration: 2523. (total of 2525 actions, max excludes first 2523 actions)
Max grasp success rate: 0.63566388710712, at action iteration: 2523. (total of 2525 actions, max excludes first 2523 actions)
/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:178: RuntimeWarning: Mean of empty slice.
success_rate[i] = successes.mean()
/home/costar/.local/lib/python3.6/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:180: RuntimeWarning: invalid value encountered in double_scalars
var = np.sqrt(success_rate[i] * (1 - success_rate[i]) / successes.shape[0])
Max action efficiency: 0.26634958382877527, at action iteration: 2525. (total of 2526 actions, max excludes first 2523 actions)
saving plot: 2020-05-10-00-30-05_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Testing-Sim-Rows-SPOT-Trial-Reward-Common-Sense-Testing_success_plot.png
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-10-00-30-05_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Testing/data/best_stats.json
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-10-00-30-05_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Testing/best_stats.json
Random Testing Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-06-09-59-31_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training/2020-05-10-00-30-05_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Testing
Random Testing results:
{'trial_success_rate_best_value': 0.67, 'trial_success_rate_best_index': 2523, 'grasp_success_rate_best_value': 0.63566388710712, 'grasp_success_rate_best_index': 2523, 'place_success_rate_best_value': -inf, 'place_success_rate_best_index': None, 'action_efficiency_best_value': 0.26634958382877527, 'action_efficiency_best_index': 2525}
Training Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-06-09-59-31_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training
Training results:
{'trial_success_rate_best_value': 0.5045871559633027, 'trial_success_rate_best_index': 12648, 'grasp_success_rate_best_value': 0.7074829931972789, 'grasp_success_rate_best_index': 12492, 'place_success_rate_best_value': 0.8138297872340425, 'place_success_rate_best_index': 11971, 'action_efficiency_best_value': 0.696, 'action_efficiency_best_index': 12624}
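The RuntimeWarnings above are plot.py taking the mean of an empty slice, most likely because an evaluation window contained no place attempts, which is also why place_success_rate_best_value comes out as -inf with a None index. A small guard along these lines would avoid the warning (sketch only, assuming `successes` is the per-window array plot.py averages):

    import numpy as np

    def safe_success_rate(successes):
        """Return NaN instead of warning when the window has no samples."""
        successes = np.asarray(successes, dtype=float)
        return float('nan') if successes.size == 0 else float(successes.mean())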
MANUAL TESTING RUN ON action_efficiency_best_index
> export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --is_testing --max_test_trials 100 --random_seed 1238 --snapshot_file '/home/costar/src/real_good_robot/logs/2020-05-06-09-59-31_Sim-Rows-SPOT-Trial-Reward-Common-Sense-Training/models/snapshot.reinforcement_action_efficiency_best_index.pth'
> Commit: 13068e53c269b01b1385a3b185d38b006eca762b
> Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-12-18-04-15_Sim-Rows-SPOT-Trial-Reward-Masked-Testing
>
> TRIAL 100 SUCCESS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> prev_height: 0.0 max_z: 0.05112534729294889 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> STACK: trial: 101 actions/partial: 6.153191489361702 actions/full stack: 16.431818181818183 (lower is better) Grasp Count: 866, grasp success rate: 0.6812933025404158 place_on_stack_rate: 0.4051724137931034 place_attempts: 580 partial_stack_successes: 235 stack_successes: 88 trial_success_rate: 0.8712871287128713 stack goal: [2 1 3] current_height: 4
> Time elapsed: 26.519145
> Trainer iteration: 1445 complete
>
> Testing iteration: 1446
> prev_height: 0.0 max_z: 0.05111105777395382 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> Current count of pixels with stuff: 2593.0 threshold below which the scene is considered empty: 900
> Change detected: True (value: 4472)
> Trainer.get_label_value(): Current reward: 4.000000 Current reward multiplier: 4.000000 Predicted Future reward: 6.469117 Expected reward: 4.000000 + 0.650000 x 6.469117 = 8.204926
> trial_complete_indices: [ 8. 12. 19. 24. 28. 32. 34. 53. 65. 71. 77. 102.
> 107. 113. 127. 185. 189. 193. 197. 199. 226. 230. 233. 239.
> 255. 268. 296. 300. 305. 308. 317. 327. 331. 335. 410. 424.
> 430. 448. 463. 467. 489. 550. 555. 559. 568. 572. 578. 588.
> 724. 746. 750. 765. 771. 780. 825. 836. 853. 855. 862. 866.
> 878. 921. 925. 927. 931. 935. 971. 984. 990. 1016. 1074. 1107.
> 1115. 1133. 1138. 1146. 1150. 1156. 1166. 1172. 1186. 1199. 1203. 1259.
> 1274. 1280. 1286. 1290. 1292. 1298. 1318. 1331. 1338. 1342. 1353. 1361.
> 1365. 1389. 1419. 1442. 1446.]
> Max trial success rate: 0.86, at action iteration: 1443. (total of 1445 actions, max excludes first 1443 actions)
> Max grasp success rate: 0.6809248554913295, at action iteration: 1443. (total of 1445 actions, max excludes first 1443 actions)
> Max place success rate: 0.7582037996545768, at action iteration: 1445. (total of 1446 actions, max excludes first 1443 actions)
> Max action efficiency: 0.3700623700623701, at action iteration: 1445. (total of 1446 actions, max excludes first 1443 actions)
> saving plot: 2020-05-12-18-04-15_Sim-Rows-SPOT-Trial-Reward-Masked-Testing-Sim-Rows-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-12-18-04-15_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-12-18-04-15_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Trial logging complete: 101 --------------------------------------------------------------
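The Trainer.get_label_value() line above shows how the training label is assembled: the current reward plus the discounted predicted future reward, with the discount set by --future_reward_discount (0.65 in these runs). A minimal sketch of that arithmetic, with variable names assumed for illustration rather than taken from the repo:

    # Sketch of the expected-reward computation printed by Trainer.get_label_value();
    # names are assumptions for illustration, not the repo's exact code.
    def expected_reward(current_reward, predicted_future_reward, future_reward_discount=0.65):
        return current_reward + future_reward_discount * predicted_future_reward

    # Reproduces the logged example: 4.000000 + 0.650000 x 6.469117 = 8.204926
    print(expected_reward(4.0, 6.469117))  # -> 8.20492605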
SIM STACK - SPOT-Q-MASKED - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-13
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
RESUME: export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: ccf30348e265a471080b3ee906065e059f6e8573
Commit (resume, for testing, training complete): 41d2eaff3dc0f3572cecf43805de8582d62d9b31
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> Testing run prioritizing trial success
> STACK: trial: 101 actions/partial: 4.525547445255475 actions/full stack: 18.979591836734695 (lower is better) Grasp Count: 1090, grasp success rate: 0.7018348623853211 place_on_stack_rate: 0.5379581151832461 place_attempts: 764 partial_stack_successes: 411 stack_successes: 98 trial_success_rate: 0.9702970297029703 stack goal: None current_height: 2.0596242913479084
> trial_complete_indices: [ 23. 29. 83. 87. 93. 98. 122. 154. 165. 188. 205. 213.
> 227. 233. 239. 279. 285. 294. 300. 334. 350. 359. 367. 437.
> 441. 447. 453. 642. 651. 669. 676. 685. 690. 696. 703. 727.
> 733. 750. 763. 769. 778. 784. 790. 805. 861. 911. 924. 931.
> 965. 972. 980. 996. 1009. 1015. 1023. 1108. 1115. 1127. 1142. 1173.
> 1179. 1186. 1208. 1268. 1274. 1294. 1307. 1313. 1325. 1329. 1335. 1377.
> 1381. 1415. 1453. 1472. 1484. 1503. 1511. 1517. 1526. 1537. 1546. 1554.
> 1605. 1612. 1622. 1630. 1676. 1707. 1716. 1740. 1747. 1753. 1775. 1803.
> 1814. 1822. 1836. 1851. 1859.]
> Max trial success rate: 0.97, at action iteration: 1856. (total of 1858 actions, max excludes first 1856 actions)
> Max grasp success rate: 0.7022058823529411, at action iteration: 1856. (total of 1858 actions, max excludes first 1856 actions)
> Max place success rate: 0.7451235370611183, at action iteration: 1856. (total of 1859 actions, max excludes first 1856 actions)
> Max action efficiency: 0.32004310344827586, at action iteration: 1858. (total of 1859 actions, max excludes first 1856 actions)
> saving plot: 2020-05-17-13-07-19_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-17-13-07-19_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-17-13-07-19_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-05-17-13-07-19_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.97, 'trial_success_rate_best_index': 1856, 'grasp_success_rate_best_value': 0.7022058823529411, 'grasp_success_rate_best_index': 1856, 'place_success_rate_best_value': 0.7451235370611183, 'place_success_rate_best_index': 1856, 'action_efficiency_best_value': 0.32004310344827586, 'action_efficiency_best_index': 1858}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'action_efficiency_best_index': 10008, 'action_efficiency_best_value': 0.684, 'grasp_success_rate_best_index': 15783, 'grasp_success_rate_best_value': 0.8834586466165414, 'place_success_rate_best_index': 17570, 'place_success_rate_best_value': 0.8616071428571429, 'trial_success_rate_best_index': 10011, 'trial_success_rate_best_value': 0.8507462686567164}
> *********** 100% trial success testing **********
> Testing run prioritizing action efficiency:
> {"action_efficiency_best_index": 1325, "action_efficiency_best_value": 0.4580498866213152, "grasp_success_rate_best_index": 1323, "grasp_success_rate_best_value": 0.7697456492637216, "place_success_rate_best_index": 1325, "place_success_rate_best_value": 0.7885615251299827, "trial_success_rate_best_index": 1323, "trial_success_rate_best_value": 1.0}
> *********** 100% trial success testing **********
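The "STACK:" summary line above is a set of ratios over the logged counters (trial: 101, Grasp Count: 1090, place_attempts: 764, partial_stack_successes: 411, stack_successes: 98). A small sketch reproducing those numbers; the grasp-success count and the total action counter are assumptions chosen to match the printed values:

    # Sketch reproducing the ratios in the "STACK:" summary line above.
    trials = 101
    total_actions = 1860               # assumed trainer action counter for this test run
    grasp_count, grasp_successes = 1090, 765
    place_attempts = 764
    partial_stack_successes = 411
    stack_successes = 98

    print('actions/partial:', total_actions / partial_stack_successes)       # ~4.5255
    print('actions/full stack:', total_actions / stack_successes)            # ~18.9796
    print('grasp success rate:', grasp_successes / grasp_count)              # ~0.7018
    print('place_on_stack_rate:', partial_stack_successes / place_attempts)  # ~0.5380
    print('trial_success_rate:', stack_successes / trials)                   # ~0.9703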
SIM ROW - SPOT-Q-MASKED - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-13
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
RESUME: export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training
Commit: ccf30348e265a471080b3ee906065e059f6e8573
Commit (resume, for testing, training complete): 41d2eaff3dc0f3572cecf43805de8582d62d9b31
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
> Testing run prioritizing trial success
> STACK: trial: 101 actions/partial: 6.809885931558935 actions/full stack: 18.852631578947367 (lower is better) Grasp Count: 1085, grasp success rate: 0.6552995391705069 place_on_stack_rate: 0.37252124645892354 place_attempts: 706 partial_stack_successes: 263 stack_successes: 95 trial_success_rate: 0.9405940594059405 stack goal: [2] current_height: 1
> trial_complete_indices: [ 6. 10. 18. 28. 58. 62. 69. 87. 94. 102. 108. 115.
> 121. 175. 181. 186. 188. 196. 201. 208. 217. 219. 228. 258.
> 269. 296. 340. 346. 350. 354. 366. 469. 474. 478. 490. 494.
> 505. 521. 547. 557. 599. 628. 648. 676. 680. 684. 740. 772.
> 783. 794. 799. 812. 817. 908. 916. 925. 931. 940. 947. 953.
> 966. 981. 991. 1000. 1066. 1146. 1154. 1188. 1196. 1200. 1294. 1308.
> 1312. 1320. 1324. 1338. 1344. 1381. 1389. 1397. 1401. 1417. 1441. 1481.
> 1485. 1491. 1546. 1582. 1588. 1595. 1601. 1648. 1664. 1706. 1720. 1727.
> 1729. 1767. 1769. 1785. 1790.]
> Max trial success rate: 0.94, at action iteration: 1787. (total of 1789 actions, max excludes first 1787 actions)
> Max grasp success rate: 0.6555863342566943, at action iteration: 1787. (total of 1789 actions, max excludes first 1787 actions)
> Max action efficiency: 0.37604924454392835, at action iteration: 1789. (total of 1790 actions, max excludes first 1787 actions)
> saving plot: 2020-05-17-13-08-59_Sim-Rows-SPOT-Trial-Reward-Masked-Testing-Sim-Rows-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-17-13-08-59_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-17-13-08-59_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training/2020-05-17-13-08-59_Sim-Rows-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> Random testing results after manual bugfix: {"action_efficiency_best_index": 1789, "action_efficiency_best_value": 0.3764705882352941, "grasp_success_rate_best_index": 1785, "grasp_success_rate_best_value": 0.6561922365988909, "place_success_rate_best_index": 1785, "place_success_rate_best_value": 0.7634560906515581, "trial_success_rate_best_index": 1787, "trial_success_rate_best_value": 0.94}
> XXX superseded, see the corrected results above: {'trial_success_rate_best_value': 0.94, 'trial_success_rate_best_index': 1787, 'grasp_success_rate_best_value': 0.6555863342566943, 'grasp_success_rate_best_index': 1787, 'place_success_rate_best_value': -inf, 'place_success_rate_best_index': None, 'action_efficiency_best_value': 0.37604924454392835, 'action_efficiency_best_index': 1789}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'action_efficiency_best_index': 12997, 'action_efficiency_best_value': 0.672, 'grasp_success_rate_best_index': 17482, 'grasp_success_rate_best_value': 0.6845637583892618, 'place_success_rate_best_index': 12935, 'place_success_rate_best_value': 0.8296703296703297, 'trial_success_rate_best_index': 12367, 'trial_success_rate_best_value': 0.5662650602409639}
> Testing run prioritizing action efficiency
> STACK: trial: 101 actions/partial: 6.560975609756097 actions/full stack: 17.543478260869566 (lower is better) Grasp Count: 1018, grasp success rate: 0.5943025540275049 place_on_stack_rate: 0.412751677852349 place_attempts: 596 partial_stack_successes: 246 stack_successes: 92 trial_success_rate: 0.9108910891089109 stack goal: [0 2] current_height: 2
> trial_complete_indices: [ 16. 23. 31. 58. 76. 81. 100. 104. 157. 168. 172. 252.
> 262. 266. 271. 360. 393. 397. 405. 411. 415. 423. 432. 437.
> 445. 460. 473. 571. 579. 588. 592. 599. 619. 626. 632. 747.
> 753. 769. 782. 788. 794. 816. 877. 887. 897. 903. 905. 974.
> 978. 986. 994. 1005. 1012. 1016. 1020. 1026. 1039. 1043. 1088. 1137.
> 1139. 1148. 1155. 1164. 1172. 1183. 1233. 1242. 1253. 1257. 1264. 1275.
> 1287. 1290. 1297. 1309. 1313. 1317. 1322. 1342. 1350. 1354. 1359. 1402.
> 1443. 1452. 1458. 1462. 1469. 1473. 1482. 1489. 1495. 1504. 1506. 1518.
> 1523. 1532. 1541. 1569. 1613.]
> Max trial success rate: 0.92, at action iteration: 1610. (total of 1612 actions, max excludes first 1610 actions)
> Max grasp success rate: 0.5950738916256157, at action iteration: 1610. (total of 1612 actions, max excludes first 1610 actions)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:178: RuntimeWarning: Mean of empty slice.
> success_rate[i] = successes.mean()
> /home/costar/.local/lib/python3.6/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
> ret = ret.dtype.type(ret / rcount)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:180: RuntimeWarning: invalid value encountered in double_scalars
> var = np.sqrt(success_rate[i] * (1 - success_rate[i]) / successes.shape[0])
> Max action efficiency: 0.3875776397515528, at action iteration: 1610. (total of 1613 actions, max excludes first 1610 actions)
> saving plot: 2020-05-17-22-05-52_Sim-Rows-SPOT-Trial-Reward-Masked-Testing-Sim-Rows-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-17-22-05-52_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-17-22-05-52_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training/2020-05-17-22-05-52_Sim-Rows-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.92, 'trial_success_rate_best_index': 1610, 'grasp_success_rate_best_value': 0.5950738916256157, 'grasp_success_rate_best_index': 1610, 'place_success_rate_best_value': -inf, 'place_success_rate_best_index': None, 'action_efficiency_best_value': 0.3875776397515528, 'action_efficiency_best_index': 1610}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'action_efficiency_best_index': 12997, 'action_efficiency_best_value': 0.672, 'grasp_success_rate_best_index': 17482, 'grasp_success_rate_best_value': 0.6845637583892618, 'place_success_rate_best_index': 12935, 'place_success_rate_best_value': 0.8296703296703297, 'trial_success_rate_best_index': 12367, 'trial_success_rate_best_value': 0.5662650602409639}
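The RuntimeWarnings above come from plot.py taking the mean of an empty window while computing a success rate. A minimal sketch of that pattern with a guard, assuming a simple sliding-window success-rate computation (not the repo's exact code):

    import numpy as np

    def windowed_success_rate(successes, window=200):
        # successes: 1-D array of 0/1 outcomes per action
        rate = np.full(len(successes), np.nan)
        for i in range(len(successes)):
            w = successes[max(0, i - window):i]
            if w.size == 0:
                continue  # guard: avoids "Mean of empty slice" and the invalid sqrt that follows
            rate[i] = w.mean()
        return rate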
SIM STACK - SPOT STANDARD - TRIAL REWARD - RANDOM ACTIONS - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-18
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
RESUME: export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-18-19-56-49_Sim-Stack-SPOT-Trial-Reward-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-18-19-56-49_Sim-Stack-SPOT-Trial-Reward-Training
Commit: e99391ae3c0921bd95b5b5d2a7d6e992efa69d63
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> Test run
> {"action_efficiency_best_index": 1966, "action_efficiency_best_value": 0.29633401221995925, "grasp_success_rate_best_index": 1964, "grasp_success_rate_best_value": 0.689328743545611, "place_success_rate_best_index": 1964, "place_success_rate_best_value": 0.6961394769613948, "trial_success_rate_best_index": 1964, "trial_success_rate_best_value": 0.95}
> '/home/costar/src/real_good_robot/logs/2020-05-18-19-56-49_Sim-Stack-SPOT-Trial-Reward-Training/2020-05-22-12-55-27_Sim-Stack-SPOT-Trial-Reward-Testing/best_stats.json'
SIM ROW - SPOT STANDARD - TRIAL REWARD - RANDOM ACTIONS - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-18
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-18-19-57-17_Sim-Rows-SPOT-Trial-Reward-Training
Commit: e99391ae3c0921bd95b5b5d2a7d6e992efa69d63
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
> Testing run prioritizing action efficiency
> export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume '/home/costar/src/real_good_robot/logs/2020-05-18-19-57-17_Sim-Rows-SPOT-Trial-Reward-Training'
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 0.783435 (push), 4.346649 (grasp), 8.664590 (place)
> Action: grasp at (8, 87, 152)
> Training loss: 0.981009
> Executing: grasp at (-0.420000, -0.050000, 0.050996) orientation: 3.141593
> gripper position: 0.029739439487457275
> gripper position: 0.02550262212753296
> gripper position: 0.004024624824523926
> gripper position: 0.003876298666000366
> Grasp successful: True
> prev_height: 0.0 max_z: 0.051110193888699765 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> check_row: True | row_size: 2 | blocks: ['blue' 'yellow']
> check_stack() stack_height: 2 stack matches current goal: True partial_stack_success: True Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 13.72289156626506 actions/full stack: 39.275862068965516 (lower is better) Grasp Count: 1899, grasp success rate: 0.8072669826224329 place_on_stack_rate: 0.16403162055335968 place_attempts: 1518 partial_stack_successes: 249 stack_successes: 87 trial_success_rate: 0.8613861386138614 stack goal: [0 1] current_height: 2
> trial_complete_indices: [ 143. 210. 268. 274. 280. 305. 307. 319. 323. 388. 450. 454.
> 475. 477. 624. 643. 649. 656. 667. 671. 675. 776. 788. 817.
> 821. 846. 850. 927. 929. 942. 981. 994. 1003. 1007. 1016. 1059.
> 1224. 1234. 1240. 1251. 1257. 1261. 1263. 1307. 1310. 1321. 1325. 1334.
> 1342. 1351. 1361. 1371. 1391. 1665. 1670. 1674. 1680. 1686. 1693. 1701.
> 1717. 1723. 1742. 1759. 1763. 1769. 1823. 1831. 1848. 1854. 2335. 2597.
> 2605. 2618. 2624. 2632. 2636. 2744. 2758. 2766. 2772. 2959. 2961. 2967.
> 2981. 2983. 2985. 3006. 3010. 3019. 3170. 3182. 3206. 3214. 3216. 3220.
> 3224. 3235. 3283. 3367. 3416.]
> Max trial success rate: 0.87, at action iteration: 3413. (total of 3415 actions, max excludes first 3413 actions)
> Max grasp success rate: 0.8074894514767933, at action iteration: 3413. (total of 3415 actions, max excludes first 3413 actions)
> Max place success rate: 0.6021080368906456, at action iteration: 3413. (total of 3414 actions, max excludes first 3413 actions)
> Max action efficiency: 0.1652505127453853, at action iteration: 3413. (total of 3416 actions, max excludes first 3413 actions)
> saving plot: 2020-05-23-16-29-39_Sim-Rows-SPOT-Trial-Reward-Testing-Sim-Rows-SPOT-Trial-Reward-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-23-16-29-39_Sim-Rows-SPOT-Trial-Reward-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-23-16-29-39_Sim-Rows-SPOT-Trial-Reward-Testing/best_stats.json
> Choosing a snapshot from the following options:{'action_efficiency_best_index': 19725, 'action_efficiency_best_value': 0.576, 'grasp_success_rate_best_index': 17982, 'grasp_success_rate_best_value': 0.9609375, 'place_success_rate_best_index': 1949, 'place_success_rate_best_value': 0.8333333333333334, 'trial_success_rate_best_index': 18012, 'trial_success_rate_best_value': 0.5714285714285714}
> Evaluating trial_success_rate_best_value
> The trial_success_rate_best_value is fantastic at 0.5714285714285714, so we will look for the best action_efficiency_best_value.
> Snapshot chosen: /home/costar/src/real_good_robot/logs/2020-05-18-19-57-17_Sim-Rows-SPOT-Trial-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
> Random Testing Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-18-19-57-17_Sim-Rows-SPOT-Trial-Reward-Training/2020-05-23-16-29-39_Sim-Rows-SPOT-Trial-Reward-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.87, 'trial_success_rate_best_index': 3413, 'grasp_success_rate_best_value': 0.8074894514767933, 'grasp_success_rate_best_index': 3413, 'place_success_rate_best_value': 0.6021080368906456, 'place_success_rate_best_index': 3413, 'action_efficiency_best_value': 0.1652505127453853, 'action_efficiency_best_index': 3413}
> Training Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-18-19-57-17_Sim-Rows-SPOT-Trial-Reward-Training
> Training results:
> {'action_efficiency_best_index': 19725, 'action_efficiency_best_value': 0.576, 'grasp_success_rate_best_index': 17982, 'grasp_success_rate_best_value': 0.9609375, 'place_success_rate_best_index': 1949, 'place_success_rate_best_value': 0.8333333333333334, 'trial_success_rate_best_index': 18012, 'trial_success_rate_best_value': 0.5714285714285714}
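The "Choosing a snapshot from the following options" lines above show how a model is picked for testing from the training best_stats: start from the snapshot with the best trial success rate, and if that value is already judged "fantastic", fall back to a snapshot optimizing a secondary metric such as action efficiency. A rough sketch of that decision; the threshold and the fallback order are assumptions for illustration and may differ from the real main.py logic:

    import os

    def choose_snapshot(training_best_stats, models_dir, fantastic_threshold=0.5):
        # Hypothetical reconstruction of the selection heuristic seen in the log above.
        metric = 'trial_success_rate'
        if training_best_stats['trial_success_rate_best_value'] >= fantastic_threshold:
            metric = 'action_efficiency'
        return os.path.join(models_dir, 'snapshot.reinforcement_%s_best_value.pth' % metric)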
SIM STACK - SPOT Masked no SPOT-Q (alg 1 if statement mask backprop is disabled) - TRIAL REWARD - RANDOM ACTIONS - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-23
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --trial_reward --common_sense --no_common_sense_backprop
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-23-14-31-09_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: a534735959ec2747c3b134a6d3067135a5c7bd75 release tag:v0.16.0
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 1.434610 (push), 1.922926 (grasp), 2.361718 (place)
> Action: grasp at (4, 151, 135)
> Training loss: 3.844077
> Executing: grasp at (-0.454000, 0.078000, 0.001002) orientation: 1.570796
> gripper position: 0.030432865023612976
> gripper position: 0.026735419407486916
> gripper position: 0.0015385448932647705
> gripper position: -0.02276727557182312
> gripper position: -0.042291462421417236
> Grasp successful: False
> prev_height: 0.0 max_z: 0.10307922494011586 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> prev_height: 1.0 max_z: 2.061584498802317 goal_success: True needed to reset: False max_workspace_height: 0.6 <<<<<<<<<<<
> check_stack() stack_height: 2.061584498802317 stack matches current goal: True partial_stack_success: True Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 3.2612612612612613 actions/full stack: 10.86 (lower is better) Grasp Count: 593, grasp success rate: 0.8145025295109612 place_on_stack_rate: 0.6894409937888198 place_attempts: 483 partial_stack_successes: 333 stack_successes: 100 trial_success_rate: 0.9900990099009901 stack goal: None current_height: 2.061584498802317
> trial_complete_indices: [ 6. 18. 24. 30. 76. 82. 86. 95. 130. 136. 140. 161.
> 167. 177. 183. 199. 212. 224. 233. 239. 245. 251. 260. 272.
> 278. 284. 294. 300. 304. 317. 327. 335. 343. 347. 353. 360.
> 371. 391. 399. 405. 413. 417. 425. 431. 437. 445. 451. 455.
> 476. 486. 497. 518. 524. 549. 559. 565. 577. 582. 616. 625.
> 631. 639. 649. 655. 671. 685. 694. 698. 704. 724. 731. 742.
> 762. 827. 833. 841. 847. 854. 860. 867. 873. 908. 914. 934.
> 940. 946. 952. 963. 969. 977. 993. 999. 1007. 1016. 1027. 1038.
> 1044. 1053. 1065. 1071. 1085.]
> Max trial success rate: 0.99, at action iteration: 1082. (total of 1084 actions, max excludes first 1082 actions)
> Max grasp success rate: 0.8155668358714044, at action iteration: 1082. (total of 1084 actions, max excludes first 1082 actions)
> Max place success rate: 0.790650406504065, at action iteration: 1082. (total of 1085 actions, max excludes first 1082 actions)
> Max action efficiency: 0.5545286506469501, at action iteration: 1084. (total of 1085 actions, max excludes first 1082 actions)
> saving plot: 2020-05-27-04-58-39_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-04-58-39_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-04-58-39_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Choosing a snapshot from the following options:{'trial_success_rate_best_value': 0.8913043478260869, 'trial_success_rate_best_index': 16675, 'grasp_success_rate_best_value': 0.8388278388278388, 'grasp_success_rate_best_index': 19892, 'place_success_rate_best_value': 0.8356164383561644, 'place_success_rate_best_index': 15066, 'action_efficiency_best_value': 0.576, 'action_efficiency_best_index': 18579}
> Evaluating trial_success_rate_best_value
> The trial_success_rate_best_value is fantastic at 0.8913043478260869, so we will look for the best action_efficiency_best_value.
> Snapshot chosen: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-23-14-31-09_Sim-Stack-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-23-14-31-09_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-05-27-04-58-39_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.99, 'trial_success_rate_best_index': 1082, 'grasp_success_rate_best_value': 0.8155668358714044, 'grasp_success_rate_best_index': 1082, 'place_success_rate_best_value': 0.790650406504065, 'place_success_rate_best_index': 1082, 'action_efficiency_best_value': 0.5545286506469501, 'action_efficiency_best_index': 1084}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-23-14-31-09_Sim-Stack-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.8913043478260869, 'trial_success_rate_best_index': 16675, 'grasp_success_rate_best_value': 0.8388278388278388, 'grasp_success_rate_best_index': 19892, 'place_success_rate_best_value': 0.8356164383561644, 'place_success_rate_best_index': 15066, 'action_efficiency_best_value': 0.576, 'action_efficiency_best_index': 18579}
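In the check_stack() lines above, max_z is printed twice: first in meters and then converted to stack-height units. The paired values are consistent with dividing by a nominal block height of about 0.05 m; a small sketch of that conversion (the constant is inferred from the log, not taken from the code):

    ASSUMED_BLOCK_HEIGHT_M = 0.05  # inferred from the paired max_z values in the log

    def z_to_block_units(max_z_m):
        return max_z_m / ASSUMED_BLOCK_HEIGHT_M

    print(z_to_block_units(0.10307922494011586))  # ~2.0616, matches the second max_z line above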
SIM ROW - SPOT Masked no SPOT-Q (alg 1 if statement mask backprop is disabled) - TRIAL REWARD - RANDOM ACTIONS - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-23
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --trial_reward --common_sense --no_common_sense_backprop
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-24-09-36-39_Sim-Rows-SPOT-Trial-Reward-Masked-Training
Commit: a534735959ec2747c3b134a6d3067135a5c7bd75 release tag:v0.16.0
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
> /home/costar/src/real_good_robot/logs/2020-05-24-09-36-39_Sim-Rows-SPOT-Trial-Reward-Masked-Training/2020-05-28-03-27-32_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> {"action_efficiency_best_index": 1189, "action_efficiency_best_value": 0.4894869638351556, "grasp_success_rate_best_index": 1189, "grasp_success_rate_best_value": 0.7434402332361516, "place_success_rate_best_index": 1189, "place_success_rate_best_value": 0.8452380952380952, "trial_success_rate_best_index": 1189, "trial_success_rate_best_value": 0.93}
SIM STACK - SPOT STANDARD progress TRIAL aka rtrial - TRIAL REWARD - RANDOM ACTIONS - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-27
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
RESUME: ± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-12-35-08_Sim-Stack-SPOT-Trial-Reward-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-12-35-08_Sim-Stack-SPOT-Trial-Reward-Training
Commit: a534735959ec2747c3b134a6d3067135a5c7bd75 release tag:v0.16.0
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 0.926641 (push), 2.734785 (grasp), 6.851398 (place)
> Action: grasp at (0, 103, 151)
> Training loss: 0.528587
> Executing: grasp at (-0.422000, -0.018000, 0.001003) orientation: 0.000000
> gripper position: 0.03009691834449768
> gripper position: 0.0258101224899292
> gripper position: 0.0006317198276519775
> gripper position: -0.02364581823348999
> gripper position: -0.04264447093009949
> Grasp successful: False
> prev_height: 0.0 max_z: 0.051131368098522104 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> prev_height: 1.0 max_z: 1.022627361970442 goal_success: False needed to reset: False max_workspace_height: 0.6 <<<<<<<<<<<
> check_stack() stack_height: 1.022627361970442 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 4.434782608695652 actions/full stack: 18.545454545454547 (lower is better) Grasp Count: 1082, grasp success rate: 0.6977818853974121 place_on_stack_rate: 0.5490716180371353 place_attempts: 754 partial_stack_successes: 414 stack_successes: 99 trial_success_rate: 0.9801980198019802 stack goal: None current_height: 1.022627361970442
> trial_complete_indices: [ 10. 16. 26. 66. 72. 118. 124. 128. 136. 186. 235. 241.
> 252. 263. 382. 411. 444. 454. 458. 491. 529. 566. 579. 586.
> 671. 680. 781. 787. 795. 827. 875. 883. 891. 919. 931. 942.
> 957. 974. 982. 997. 1001. 1009. 1019. 1031. 1044. 1066. 1107. 1134.
> 1155. 1163. 1169. 1192. 1201. 1219. 1225. 1248. 1254. 1281. 1301. 1311.
> 1317. 1345. 1351. 1390. 1394. 1404. 1415. 1421. 1431. 1452. 1458. 1469.
> 1475. 1485. 1522. 1542. 1549. 1563. 1578. 1585. 1609. 1620. 1642. 1646.
> 1658. 1669. 1679. 1705. 1711. 1717. 1725. 1736. 1742. 1748. 1762. 1785.
> 1793. 1806. 1813. 1818. 1835.]
> Max trial success rate: 0.98, at action iteration: 1832. (total of 1834 actions, max excludes first 1832 actions)
> Max grasp success rate: 0.6981481481481482, at action iteration: 1832. (total of 1834 actions, max excludes first 1832 actions)
> Max place success rate: 0.7861885790172642, at action iteration: 1834. (total of 1835 actions, max excludes first 1832 actions)
> Max action efficiency: 0.324235807860262, at action iteration: 1834. (total of 1835 actions, max excludes first 1832 actions)
> saving plot: 2020-05-31-05-18-07_Sim-Stack-SPOT-Trial-Reward-Testing-Sim-Stack-SPOT-Trial-Reward-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-31-05-18-07_Sim-Stack-SPOT-Trial-Reward-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-31-05-18-07_Sim-Stack-SPOT-Trial-Reward-Testing/best_stats.json
> Choosing a snapshot from the following options:{'trial_success_rate_best_value': 0.8157894736842105, 'trial_success_rate_best_index': 10807, 'grasp_success_rate_best_value': 0.8550185873605948, 'grasp_success_rate_best_index': 10825, 'place_success_rate_best_value': 0.7741935483870968, 'place_success_rate_best_index': 13745, 'action_efficiency_best_value': 0.384, 'action_efficiency_best_index': 10746}
> Evaluating trial_success_rate_best_value
> Snapshot chosen: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-12-35-08_Sim-Stack-SPOT-Trial-Reward-Training/models/snapshot.reinforcement_trial_success_rate_best_value.pth
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-12-35-08_Sim-Stack-SPOT-Trial-Reward-Training/2020-05-31-05-18-07_Sim-Stack-SPOT-Trial-Reward-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.98, 'trial_success_rate_best_index': 1832, 'grasp_success_rate_best_value': 0.6981481481481482, 'grasp_success_rate_best_index': 1832, 'place_success_rate_best_value': 0.7861885790172642, 'place_success_rate_best_index': 1834, 'action_efficiency_best_value': 0.324235807860262, 'action_efficiency_best_index': 1834}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-27-12-35-08_Sim-Stack-SPOT-Trial-Reward-Training
> Training results:
> {'trial_success_rate_best_value': 0.8157894736842105, 'trial_success_rate_best_index': 10807, 'grasp_success_rate_best_value': 0.8550185873605948, 'grasp_success_rate_best_index': 10825, 'place_success_rate_best_value': 0.7741935483870968, 'place_success_rate_best_index': 13745, 'action_efficiency_best_value': 0.384, 'action_efficiency_best_index': 10746}
XXXX BAD RUN XXXX - SIM ROW - SPOT STANDARD progress TRIAL aka rtrial - TRIAL REWARD - RANDOM ACTIONS - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-23
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
RESUME: export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-28-10-46-31_Sim-Rows-SPOT-Trial-Reward-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-28-10-46-31_Sim-Rows-SPOT-Trial-Reward-Training
Commit: a534735959ec2747c3b134a6d3067135a5c7bd75 release tag:v0.16.0
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
> The simulator state went bad in this run (not the training algorithm), most likely because the sim caused the robot arm to go elbow down and the row detector to report false positives, so the good models were overwritten. This run cannot be used in results.
XXXX The trial_success_rate_best_value is fantastic at 1.0, so we will look for the best grasp_success_rate_best_value.
XXXX The trial_success_rate_best_value is fantastic at 1.0, so we will look for the best action_efficiency_best_value.
XXXX Snapshot chosen: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-28-10-46-31_Sim-Rows-SPOT-Trial-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
XXXX Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-28-10-46-31_Sim-Rows-SPOT-Trial-Reward-Training/2020-06-01-00-55-27_Sim-Rows-SPOT-Trial-Reward-Testing
XXXX Random Testing results:
XXXX {'trial_success_rate_best_value': 0.74, 'trial_success_rate_best_index': 2430, 'grasp_success_rate_best_value': 0.6106719367588933, 'grasp_success_rate_best_index': 2430, 'place_success_rate_best_value': 0.7897042716319824, 'place_success_rate_best_index': 2430, 'action_efficiency_best_value': 0.2, 'action_efficiency_best_index': 2430}
XXXX Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-28-10-46-31_Sim-Rows-SPOT-Trial-Reward-Training
XXXX Training results:
XXXX {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 15761, 'grasp_success_rate_best_value': 0.9960159362549801, 'grasp_success_rate_best_index': 15763, 'place_success_rate_best_value': 0.7959183673469388, 'place_success_rate_best_index': 12706, 'action_efficiency_best_value': 0.588, 'action_efficiency_best_index': 10820}
SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-31
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-31-17-25-41_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: 12d9481717486342dbfcaff191ddb1428f102406 release tag:v0.16.1
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 0.580019 (push), 4.184012 (grasp), 8.394482 (place)
> Action: grasp at (0, 73, 119)
> Training loss: 0.248444
> Executing: grasp at (-0.486000, -0.078000, 0.001000) orientation: 0.000000
> gripper position: 0.03083541989326477
> gripper position: 0.026231884956359863
> gripper position: 0.0011520087718963623
> gripper position: -0.023060262203216553
> gripper position: -0.04178208112716675
> gripper position: -0.044988662004470825
> Grasp successful: False
> prev_height: 0.0 max_z: 0.05113248210487194 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> prev_height: 1.0 max_z: 1.0226496420974387 goal_success: False needed to reset: False max_workspace_height: 0.6 <<<<<<<<<<<
> check_stack() stack_height: 1.0226496420974387 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 3.913793103448276 actions/full stack: 13.485148514851485 (lower is better) Grasp Count: 801, grasp success rate: 0.700374531835206 place_on_stack_rate: 0.6203208556149733 place_attempts: 561 partial_stack_successes: 348 stack_successes: 101 trial_success_rate: 1.0 stack goal: None current_height: 1.0226496420974387
> trial_complete_indices: [ 9. 38. 51. 64. 79. 87. 94. 98. 102. 108. 122. 162.
> 171. 190. 198. 210. 219. 225. 229. 235. 241. 259. 273. 295.
> 305. 311. 331. 354. 360. 368. 380. 386. 404. 419. 427. 451.
> 469. 475. 511. 522. 528. 538. 561. 575. 579. 587. 612. 622.
> 642. 648. 660. 678. 689. 696. 708. 723. 768. 777. 798. 812.
> 821. 827. 833. 839. 850. 858. 885. 895. 911. 931. 937. 958.
> 966. 980. 991. 995. 1002. 1073. 1090. 1103. 1115. 1170. 1189. 1199.
> 1209. 1217. 1225. 1231. 1244. 1261. 1272. 1280. 1287. 1291. 1300. 1309.
> 1329. 1341. 1348. 1354. 1361.]
> Max trial success rate: 1.0, at action iteration: 1358. (total of 1360 actions, max excludes first 1358 actions)
> Max grasp success rate: 0.7008760951188986, at action iteration: 1358. (total of 1360 actions, max excludes first 1358 actions)
> Max place success rate: 0.7625, at action iteration: 1358. (total of 1361 actions, max excludes first 1358 actions)
> Max action efficiency: 0.44624447717231225, at action iteration: 1360. (total of 1361 actions, max excludes first 1358 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-31-17-25-41_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-05-01-08-01_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> *********** 100% trial success testing **********
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 1358, 'grasp_success_rate_best_value': 0.7008760951188986, 'grasp_success_rate_best_index': 1358, 'place_success_rate_best_value': 0.7625, 'place_success_rate_best_index': 1358, 'action_efficiency_best_value': 0.44624447717231225, 'action_efficiency_best_index': 1360}
> *********** 100% trial success testing **********
> Training Complete! Dir: /home/costar/src/real_good_robot/logs/2020-05-31-17-25-41_Sim-Stack-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'action_efficiency_best_index': 16175, 'action_efficiency_best_value': 0.564, 'grasp_success_rate_best_index': 13985, 'grasp_success_rate_best_value': 0.9233716475095786, 'place_success_rate_best_index': 19993, 'place_success_rate_best_value': 0.8340807174887892, 'trial_success_rate_best_index': 12586, 'trial_success_rate_best_value': 0.8125}
SIM ROW - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-31
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Creating data logging session:
Commit: 12d9481717486342dbfcaff191ddb1428f102406 release tag:v0.16.1
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
SIM ROW - Task Progress aka progress only - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-30
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Creating data logging session: /home/costar/src/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training
Commit: 12d9481717486342dbfcaff191ddb1428f102406 release tag:v0.16.1
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 5.168270 (push), 7.431992 (grasp), 9.176738 (place)
> Action: grasp at (11, 71, 104)
> Training loss: 0.093161
> Executing: grasp at (-0.516000, -0.082000, 0.051029) orientation: 4.319690
> gripper position: 0.0376565158367157
> gripper position: 0.028479814529418945
> gripper position: 0.006529122591018677
> gripper position: 0.004047483205795288
> Grasp successful: False
> prev_height: 0.0 max_z: 0.0511084382068845 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> check_row() object_color_sequence length is 0 or 1, so there is nothing to check and it passes automatically
> check_stack() stack_height: 1 stack matches current goal: True partial_stack_success: False Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 3.3846153846153846 actions/full stack: 7.111111111111111 (lower is better) Grasp Count: 384, grasp success rate: 0.8385416666666666 place_on_stack_rate: 0.65 place_attempts: 320 partial_stack_successes: 208 stack_successes: 99 trial_success_rate: 0.9801980198019802 stack goal: [0] current_height: 1
> trial_complete_indices: [ 7. 13. 17. 23. 32. 36. 40. 57. 65. 67. 71. 77. 83. 87.
> 91. 95. 97. 101. 126. 131. 138. 144. 150. 156. 160. 162. 166. 170.
> 173. 177. 185. 189. 195. 199. 212. 221. 225. 227. 231. 237. 242. 249.
> 253. 257. 261. 265. 271. 275. 277. 282. 287. 294. 420. 434. 438. 444.
> 449. 451. 459. 463. 471. 478. 482. 486. 493. 499. 503. 509. 515. 522.
> 530. 532. 537. 541. 545. 549. 553. 557. 563. 595. 599. 601. 611. 615.
> 621. 623. 631. 635. 641. 645. 649. 653. 672. 676. 680. 682. 686. 688.
> 693. 697. 703.]
> Max trial success rate: 0.98, at action iteration: 700. (total of 702 actions, max excludes first 700 actions)
> Max grasp success rate: 0.8403141361256544, at action iteration: 700. (total of 702 actions, max excludes first 700 actions)
> Max place success rate: 0.8463949843260188, at action iteration: 700. (total of 703 actions, max excludes first 700 actions)
> Max action efficiency: 0.8657142857142858, at action iteration: 702. (total of 703 actions, max excludes first 700 actions)
> saving plot: 2020-06-05-09-17-42_Sim-Rows-Two-Step-Reward-Testing-Sim-Rows-Two-Step-Reward-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-09-17-42_Sim-Rows-Two-Step-Reward-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-09-17-42_Sim-Rows-Two-Step-Reward-Testing/best_stats.json
> Choosing a snapshot from the following options:{'trial_success_rate_best_value': 0.7482517482517482, 'trial_success_rate_best_index': 19558, 'grasp_success_rate_best_value': 0.8122743682310469, 'grasp_success_rate_best_index': 19556, 'place_success_rate_best_value': 0.8878923766816144, 'place_success_rate_best_index': 19597, 'action_efficiency_best_value': 1.344, 'action_efficiency_best_index': 19459}
> Evaluating trial_success_rate_best_value
> Snapshot chosen: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_place_success_rate_best_value.pth
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training/2020-06-05-09-17-42_Sim-Rows-Two-Step-Reward-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.98, 'trial_success_rate_best_index': 700, 'grasp_success_rate_best_value': 0.8403141361256544, 'grasp_success_rate_best_index': 700, 'place_success_rate_best_value': 0.8463949843260188, 'place_success_rate_best_index': 700, 'action_efficiency_best_value': 0.8657142857142858, 'action_efficiency_best_index': 702}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training
> Training results:
> {'trial_success_rate_best_value': 0.7482517482517482, 'trial_success_rate_best_index': 19558, 'grasp_success_rate_best_value': 0.8122743682310469, 'grasp_success_rate_best_index': 19556, 'place_success_rate_best_value': 0.8878923766816144, 'place_success_rate_best_index': 19597, 'action_efficiency_best_value': 1.344, 'action_efficiency_best_index': 19459}
=============================================================<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=============================================================<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
SIM TO REAL ROBOT TEST COMMANDS 2020-06-05
=============================================================<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=============================================================<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
taskset 0x00000FFF roslaunch openni2_launch openni2.launch depth_registration:=true num_worker_threads:=4 color_depth_synchronization:=true
SIM TO REAL TESTING ROW - TEST - Task Progress aka progress only - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-30 - test on costar 2020-06-05
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --snapshot_file /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_place_success_rate_best_value.pth
RESUME: export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --snapshot_file /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_place_success_rate_best_value.pth --resume '/home/costar/src/real_good_robot/logs/2020-06-05-16-06-34_Real-Rows-Two-Step-Reward-Testing'
/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-01-13-03-15_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_place_success_rate_best_value.pth
Commit: cb55d6b8a6e8abfb1185dd945c0689ddf40546b0
FIRST TESTING RUN went down due to a row check bug, and resume didn't work. Testing dir: '/home/costar/src/real_good_robot/logs/2020-06-05-16-06-34_Real-Rows-Two-Step-Reward-Testing'
Creating data logging session: '/home/costar/src/real_good_robot/logs/2020-06-05-17-00-01_Real-Rows-Two-Step-Reward-Testing'
Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-17-00-01_Real-Rows-Two-Step-Reward-Testing
Testing results:
{'trial_success_rate_best_value': 0.9, 'trial_success_rate_best_index': 49, 'grasp_success_rate_best_value': 0.7586206896551724, 'grasp_success_rate_best_index': 50, 'place_success_rate_best_value': 1.0, 'place_success_rate_best_index': 49, 'action_efficiency_best_value': 1.469387755102041, 'action_efficiency_best_index': 51}
SIM TO REAL TESTING STACK - TEST - SPOT-Q-MASKED - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-13 - test on costar 2020-06-05
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --snapshot_file /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
Commit: cb55d6b8a6e8abfb1185dd945c0689ddf40546b0
Testing results, note there was an inverse kinematics problem for one trial:
Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-17-25-42_Real-Stack-SPOT-Trial-Reward-Masked-Testing
{'trial_success_rate_best_value': 0.8, 'trial_success_rate_best_index': 160, 'grasp_success_rate_best_value': 0.6185567010309279, 'grasp_success_rate_best_index': 160, 'place_success_rate_best_value': 0.75, 'place_success_rate_best_index': 160, 'action_efficiency_best_value': 0.3375, 'action_efficiency_best_index': 162}
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-18-28-46_Real-Stack-SPOT-Trial-Reward-Masked-Testing
Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-18-28-46_Real-Stack-SPOT-Trial-Reward-Masked-Testing
Testing results:
{'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 108, 'grasp_success_rate_best_value': 0.703125, 'grasp_success_rate_best_index': 108, 'place_success_rate_best_value': 0.8888888888888888, 'place_success_rate_best_index': 110, 'action_efficiency_best_value': 0.6111111111111112, 'action_efficiency_best_index': 110}
SIM TO REAL STACK - TEST - Task Progress aka progress only - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - workstation named spot 2020-05-30 - test on costar 2020-06-05
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --snapshot_file /home/costar/src/real_good_robot/logs/2020-05-30-13-09-38_Sim-Stack-Two-Step-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
"/home/ahundt/src/real_good_robot/logs/2020-05-30-13-09-38_Sim-Stack-Two-Step-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth"
2020-05-30-13-09-38_Sim-Stack-Two-Step-Reward-Training-Sim-Stack-Two-Step-Reward-Training_success_plot
SIM Random Testing Complete! Dir: /home/ahundt/src/real_good_robot/logs/2020-05-30-13-09-38_Sim-Stack-Two-Step-Reward-Training/2020-06-03-03-08-00_Sim-Stack-Two-Step-Reward-Testing
SIM Random Testing results:{'trial_success_rate_best_value': 0.98, 'trial_success_rate_best_index': 1234, 'grasp_success_rate_best_value': 0.8335854765506808, 'grasp_success_rate_best_index': 1235, 'place_success_rate_best_value': 0.7426086956521739, 'place_success_rate_best_index': 1236, 'action_efficiency_best_value': 0.5153970826580226, 'action_efficiency_best_index': 1236}
Commit: cb55d6b8a6e8abfb1185dd945c0689ddf40546b0
TODO(ahundt) Around iteration 57 and trial 4, a stack of 4 was created successfully, but the detector thought it was slightly shorter than 4 blocks tall. Don't count the extra actions on that trial, mark it successful, and give the model the correct score. This happened again around iteration 77, which was also a successful stack.
STACK: trial: 11 actions/partial: 3.3333333333333335 actions/full stack: 13.333333333333334 (lower is better) Grasp Count: 76, grasp success rate: 0.5789473684210527 place_on_stack_rate: 0.8181818181818182 place_attempts: 44 partial_stack_successes: 36 stack_successes: 9 trial_success_rate: 0.8181818181818182 stack goal: None current_height: 0.6913090702201624
Move to Home Position Complete
Move to Home Position Complete
trial_complete_indices: [ 6. 16. 22. 30. 57. 64. 78. 84. 97. 107. 119.]
Max trial success rate: 0.8, at action iteration: 116. (total of 118 actions, max excludes first 116 actions)
Max grasp success rate: 0.581081081081081, at action iteration: 116. (total of 118 actions, max excludes first 116 actions)
Max place success rate: 0.8837209302325582, at action iteration: 116. (total of 119 actions, max excludes first 116 actions)
Max action efficiency: 0.5172413793103449, at action iteration: 118. (total of 119 actions, max excludes first 116 actions)
saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing/transitions/trial-success-rate.log.csv
saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing/transitions/grasp-success-rate.log.csv
saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing/transitions/place-success-rate.log.csv
saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing/transitions/action-efficiency.log.csv
saving plot: 2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing-Real-Stack-Two-Step-Reward-Testing_success_plot.png
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing/data/best_stats.json
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing/best_stats.json
Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing
Testing results:
{'trial_success_rate_best_value': 0.8, 'trial_success_rate_best_index': 116, 'grasp_success_rate_best_value': 0.581081081081081, 'grasp_success_rate_best_index': 116, 'place_success_rate_best_value': 0.8837209302325582, 'place_success_rate_best_index': 116, 'action_efficiency_best_value': 0.5172413793103449, 'action_efficiency_best_index': 118}
After manual correction of trial successes based on the video and actual stack heights (grasp/place may still need a tiny adjustment):
Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-42-28_Real-Stack-Two-Step-Reward-Testing
Testing results:
{'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 116, 'grasp_success_rate_best_value': 0.581081081081081, 'grasp_success_rate_best_index': 116, 'place_success_rate_best_value': 0.8837209302325582, 'place_success_rate_best_index': 116, 'action_efficiency_best_value': 0.5172413793103449, 'action_efficiency_best_index': 118}
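The corrected result above follows directly from re-scoring the two mis-detected trials noted in the TODO: the automatic detector counted 8 of 10 trials as successful, while the video review shows all 10 stacks were actually completed. A minimal sketch of that arithmetic (the helper name is illustrative, not the project's evaluation code):

def trial_success_rate(successful_trials, total_trials):
    # trial success rate is simply successful trials / total trials
    return successful_trials / total_trials

print(trial_success_rate(8, 10))      # 0.8, as scored automatically
print(trial_success_rate(8 + 2, 10))  # 1.0, after marking the two mis-detected stacks successful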
SIM TO REAL ROW - TEST - Task Progress aka progress only - REWARD SCHEDULE 0.1, 1, 1 - workstation named spot 2020-05-30 - test on costar 2020-06-05
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --random_actions --save_visualizations --snapshot_file /home/costar/src/real_good_robot/logs/2020-05-30-13-10-52_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
Random Testing Complete! Dir: /home/ahundt/src/real_good_robot/logs/2020-05-30-13-10-52_Sim-Rows-Two-Step-Reward-Training/2020-06-03-07-00-00_Sim-Rows-Two-Step-Reward-Testing
SIM Random Testing results: {'trial_success_rate_best_value': 0.97, 'trial_success_rate_best_index': 1120, 'grasp_success_rate_best_value': 0.823051948051948, 'grasp_success_rate_best_index': 1120, 'place_success_rate_best_value': 0.8950495049504951, 'place_success_rate_best_index': 1120, 'action_efficiency_best_value': 0.5303571428571429, 'action_efficiency_best_index': 1122}
SIM Training Complete! Dir: /home/ahundt/src/real_good_robot/logs/2020-05-30-13-10-52_Sim-Rows-Two-Step-Reward-Training
"/home/costar/src/real_good_robot/logs/2020-05-30-13-10-52_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth"
"/home/ahundt/src/real_good_robot/logs/2020-05-30-13-10-52_Sim-Rows-Two-Step-Reward-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth"
Commit: cb55d6b8a6e8abfb1185dd945c0689ddf40546b0
TODO(ahundt) manually verify the log and the number of successes here; I don't think I actually saw it fail.
Creating data logging session: '/home/costar/src/real_good_robot/logs/2020-06-05-19-14-58_Real-Rows-Two-Step-Reward-Testing'
STACK: trial: 11 actions/partial: 3.0 actions/full stack: 5.181818181818182 (lower is better) Grasp Count: 33, grasp success rate: 0.7272727272727273 place_on_stack_rate: 0.7916666666666666 place_attempts: 24 partial_stack_successes: 19 stack_successes: 11 trial_success_rate: 1.0 stack goal: None current_height: 2.2254545454545456
trial_complete_indices: [ 5. 11. 15. 18. 24. 31. 35. 40. 50. 52. 56.]
Max trial success rate: 0.9, at action iteration: 53. (total of 55 actions, max excludes first 53 actions)
Max grasp success rate: 0.7741935483870968, at action iteration: 54. (total of 55 actions, max excludes first 53 actions)
Max place success rate: 0.7916666666666666, at action iteration: 55. (total of 56 actions, max excludes first 53 actions)
Max action efficiency: 1.2452830188679245, at action iteration: 55. (total of 56 actions, max excludes first 53 actions)
Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-05-19-14-58_Real-Rows-Two-Step-Reward-Testing
Testing results:
{'trial_success_rate_best_value': 0.9, 'trial_success_rate_best_index': 53, 'grasp_success_rate_best_value': 0.7741935483870968, 'grasp_success_rate_best_index': 54, 'place_success_rate_best_value': 0.7916666666666666, 'place_success_rate_best_index': 55, 'action_efficiency_best_value': 1.2452830188679245, 'action_efficiency_best_index': 55}
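The "STACK:" summary line above reports derived ratios that appear to come straight from the raw counters printed on the same line. A minimal sketch, with illustrative names, of how those ratios relate (assuming roughly 57 total actions had been logged when that line printed):

def stack_summary(total_actions, grasp_count, successful_grasps, place_attempts,
                  partial_stack_successes, stack_successes, trials):
    return {
        'actions/partial': total_actions / partial_stack_successes,  # lower is better
        'actions/full stack': total_actions / stack_successes,       # lower is better
        'grasp success rate': successful_grasps / grasp_count,
        'place_on_stack_rate': partial_stack_successes / place_attempts,
        'trial_success_rate': stack_successes / trials,
    }

# 33 grasps with 24 successes, 24 place attempts, 19 partial successes, and
# 11 completed rows in 11 trials reproduce the logged 3.0, ~5.18, ~0.727, ~0.792, 1.0
print(stack_summary(57, 33, 24, 24, 19, 11, 11))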
SIM TO REAL STACK - TEST - Task Progress SPOT-Q MASKED - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - workstation named spot 2020-06-03 - test on costar 2020-06-07
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --common_sense --snapshot_file "/home/costar/src/real_good_robot/logs/2020-06-03-11-44-02_Sim-Stack-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth"
IK PROBLEM, this run went down and was restarted below. Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-21-50_Real-Stack-Two-Step-Reward-Masked-Testing
Pre-trained model snapshot loaded from: /home/costar/src/real_good_robot/logs/2020-06-03-11-44-02_Sim-Stack-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing
SIM on spot workstation Creating data logging session: /home/ahundt/src/real_good_robot/logs/2020-06-03-11-44-02_Sim-Stack-Two-Step-Reward-Masked-Training
SIM Commit: 12d9481717486342dbfcaff191ddb1428f102406 release tag:v0.16.1
SIM GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
SIM export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --common_sense
SIM Random Testing Complete! Dir: /home/ahundt/src/real_good_robot/logs/2020-06-03-11-44-02_Sim-Stack-Two-Step-Reward-Masked-Training/2020-06-07-06-26-25_Sim-Stack-Two-Step-Reward-Masked-Testing
SIM Random Testing results: {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 1351, 'grasp_success_rate_best_value': 0.7558746736292428, 'grasp_success_rate_best_index': 1351, 'place_success_rate_best_value': 0.757679180887372, 'place_success_rate_best_index': 1351, 'action_efficiency_best_value': 0.44855662472242785, 'action_efficiency_best_index': 1353}
"snapshot_file": "/home/ahundt/src/real_good_robot/logs/2020-06-03-11-44-02_Sim-Stack-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth"
> trial_complete_indices: [ 9. 20. 27. 33. 42. 70. 79. 92. 98. 104. 111.]
> Max trial success rate: 1.0, at action iteration: 108. (total of 110 actions, max excludes first 108 actions)
> Max grasp success rate: 0.6666666666666666, at action iteration: 109. (total of 110 actions, max excludes first 108 actions)
> Max place success rate: 0.9090909090909091, at action iteration: 110. (total of 111 actions, max excludes first 108 actions)
> Max action efficiency: 0.6111111111111112, at action iteration: 110. (total of 111 actions, max excludes first 108 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing-Real-Stack-Two-Step-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing/best_stats.json
> Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-16-31-33_Real-Stack-Two-Step-Reward-Masked-Testing
> Testing results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 108, 'grasp_success_rate_best_value': 0.6666666666666666, 'grasp_success_rate_best_index': 109, 'place_success_rate_best_value': 0.9090909090909091, 'place_success_rate_best_index': 110, 'action_efficiency_best_value': 0.6111111111111112, 'action_efficiency_best_index': 110}
SIM TO REAL ROW - TEST - Task Progress SPOT-Q MASKED - REWARD SCHEDULE 0.1, 1, 1 - workstation named spot 2020-06-03 - test on costar 2020-06-07
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --random_actions --save_visualizations --common_sense --snapshot_file "/home/costar/src/real_good_robot/logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_trial_success_rate_best_value.pth"
SIM export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --common_sense
SIM on spot workstation Creating data logging session: /home/ahundt/src/real_good_robot/logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training
SIM Commit: 12d9481717486342dbfcaff191ddb1428f102406 release tag:v0.16.1
SIM GPU 1, Tab 1, port 19998, center left v-rep window, v-rep tab 8
SIM Random Testing Complete! Dir: /home/ahundt/src/real_good_robot/logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training/2020-06-06-21-34-07_Sim-Rows-Two-Step-Reward-Masked-Testing
SIM Random Testing results: {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 667, 'grasp_success_rate_best_value': 0.850415512465374, 'grasp_success_rate_best_index': 667, 'place_success_rate_best_value': 0.7752442996742671, 'place_success_rate_best_index': 667, 'action_efficiency_best_value': 0.9265367316341829, 'action_efficiency_best_index': 667}
"snapshot_file": "/home/ahundt/src/real_good_robot/logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_trial_success_rate_best_value.pth"
Pre-trained model snapshot loaded from: /home/costar/src/real_good_robot/logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_trial_success_rate_best_value.pth
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing
Note: on trial 8 or 9, a row was completed correctly but the sensor didn't pick it up, so I slid the blocks into the middle of the workspace while maintaining their exact relative positions so the row detector would score it correctly (one extra action took place).
> STACK: trial: 11 actions/partial: 3.0714285714285716 actions/full stack: 7.818181818181818 (lower is better) Grasp Count: 52, grasp success rate: 0.6538461538461539 place_on_stack_rate: 0.8235294117647058 place_attempts: 34 partial_stack_successes: 28 stack_successes: 11 trial_success_rate: 1.0 stack goal: None current_height: 0.3236363636363636
> Move to Home Position Complete
> Move to Home Position Complete
> trial_complete_indices: [ 7. 9. 17. 21. 30. 50. 54. 59. 69. 73. 85.]
> Max trial success rate: 1.0, at action iteration: 82. (total of 84 actions, max excludes first 82 actions)
> Max grasp success rate: 0.68, at action iteration: 83. (total of 84 actions, max excludes first 82 actions)
> Max place success rate: 0.8181818181818182, at action iteration: 83. (total of 84 actions, max excludes first 82 actions)
> Max action efficiency: 0.8780487804878049, at action iteration: 84. (total of 85 actions, max excludes first 82 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing-Real-Rows-Two-Step-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing/best_stats.json
> Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-17-19-34_Real-Rows-Two-Step-Reward-Masked-Testing
> Testing results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 82, 'grasp_success_rate_best_value': 0.68, 'grasp_success_rate_best_index': 83, 'place_success_rate_best_value': 0.8181818181818182, 'place_success_rate_best_index': 83, 'action_efficiency_best_value': 0.8780487804878049, 'action_efficiency_best_index': 84}
SIM TO REAL ROW - SPOT-Q-MASKED - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-05-13 - test on costar 2020-06-07
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --random_actions --save_visualizations --common_sense --trial_reward --snapshot_file /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
Note: this model may have been trained back when a few bugs were still affecting performance, so ideally we would redo training before this test, but real-robot access is limited, so we are going with the trained model we have.
One trial genuinely failed here: the robot placed an object such that it fell out of the scene.
SIM export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
SIM SNAPSHOT FILE ± '/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth'
Pre-trained model snapshot loaded from: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-05-13-12-21-00_Sim-Rows-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_action_efficiency_best_value.pth
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing
> STACK: trial: 11 actions/partial: 2.962962962962963 actions/full stack: 8.0 (lower is better) Grasp Count: 44, grasp success rate: 0.8181818181818182 place_on_stack_rate: 0.75 place_attempts: 36 partial_stack_successes: 27 stack_successes: 10 trial_success_rate: 0.9090909090909091 stack goal: None current_height: 0.7272727272727273
> Move to Home Position Complete
> Move to Home Position Complete
> trial_complete_indices: [ 6. 15. 21. 28. 32. 38. 42. 48. 63. 71. 79.]
> Max trial success rate: 0.9, at action iteration: 76. (total of 78 actions, max excludes first 76 actions)
> Max grasp success rate: 0.8333333333333334, at action iteration: 76. (total of 78 actions, max excludes first 76 actions)
> Max place success rate: 0.7428571428571429, at action iteration: 77. (total of 78 actions, max excludes first 76 actions)
> Max action efficiency: 0.868421052631579, at action iteration: 78. (total of 79 actions, max excludes first 76 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing-Real-Rows-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-18-10-48_Real-Rows-SPOT-Trial-Reward-Masked-Testing
> Testing results:
> {'trial_success_rate_best_value': 0.9, 'trial_success_rate_best_index': 76, 'grasp_success_rate_best_value': 0.8333333333333334, 'grasp_success_rate_best_index': 76, 'place_success_rate_best_value': 0.7428571428571429, 'place_success_rate_best_index': 77, 'action_efficiency_best_value': 0.868421052631579, 'action_efficiency_best_index': 78}
=============================================================<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-06-07
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training
RESUME: ± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: 3124bb5ed5db5b21a29b95b958962a1f4a1388e7 release tag:v0.16.3
RESUME Commit: a923d4d02f13998824a81fd53e8716f07ed8ba38 a couple commits after v0.16.3
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
Note: the sim Bullet physics engine encountered an error where the gripper shook continuously with a velocity error and stopped responding around actions 9220-9600, so we stopped the sim, cut the run back to the actions before 9157, and resumed.
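How that cut-off step might look in practice: a minimal sketch under the assumption that each per-action metric is logged one row per action in the transitions/*.log.csv files (the same file layout the saving lines elsewhere in these notes show), so trimming them back to action 9157 before restarting with the RESUME command drops the bad stretch. The exact set of files --resume reads is not documented here, so treat this as illustrative only:

import glob

def truncate_transition_logs(session_dir, keep_actions):
    # keep only the first `keep_actions` rows of each per-action log
    for path in glob.glob(session_dir + '/transitions/*.log.csv'):
        with open(path) as f:
            rows = f.readlines()
        with open(path, 'w') as f:
            f.writelines(rows[:keep_actions])

truncate_transition_logs('/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training', keep_actions=9157)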
Trial logging complete: 101 --------------------------------------------------------------
Running two step backprop()
Primitive confidence scores: 1.157308 (push), 6.873050 (grasp), 4.287281 (place)
Action: grasp at (2, 152, 120)
Training loss: 3.098692
Executing: grasp at (-0.484000, 0.080000, 0.050979) orientation: 0.785398
gripper position: 0.05314567685127258
gripper position: 0.03554174304008484
gripper position: 0.012224435806274414
gripper position: 0.005477994680404663
gripper position: 0.004845559597015381
gripper position: 0.0046235620975494385
gripper position: 0.0028651803731918335
Grasp successful: True
prev_height: 0.0 max_z: 0.051104635732064876 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
prev_height: 1.0 max_z: 1.0220927146412975 goal_success: False needed to reset: False max_workspace_height: 0.6 <<<<<<<<<<<
check_stack() stack_height: 1.0220927146412975 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False
STACK: trial: 101 actions/partial: 3.9243697478991595 actions/full stack: 14.443298969072165 (lower is better) Grasp Count: 789, grasp success rate: 0.7807351077313055 place_on_stack_rate: 0.5833333333333334 place_attempts: 612 partial_stack_successes: 357 stack_successes: 97 trial_success_rate: 0.9603960396039604 stack goal: None current_height: 1.0220927146412975
trial_complete_indices: [ 10. 16. 24. 30. 36. 43. 52. 64. 76. 84. 90. 97.
101. 107. 115. 132. 138. 146. 164. 168. 178. 184. 202. 224.
229. 239. 252. 258. 266. 272. 282. 288. 326. 349. 358. 374.
378. 401. 407. 421. 430. 436. 444. 450. 467. 478. 490. 495.
534. 544. 552. 563. 572. 584. 597. 614. 622. 628. 812. 832.
847. 860. 872. 881. 899. 953. 959. 970. 978. 988. 1000. 1011.
1017. 1025. 1049. 1053. 1058. 1064. 1069. 1082. 1088. 1094. 1197. 1214.
1225. 1233. 1242. 1249. 1266. 1273. 1287. 1295. 1303. 1321. 1327. 1334.
1361. 1370. 1379. 1387. 1400.]
Max trial success rate: 0.96, at action iteration: 1397. (total of 1399 actions, max excludes first 1397 actions)
Max grasp success rate: 0.7814485387547649, at action iteration: 1398. (total of 1399 actions, max excludes first 1397 actions)
Max place success rate: 0.7892156862745098, at action iteration: 1399. (total of 1400 actions, max excludes first 1397 actions)
Max action efficiency: 0.41660701503221187, at action iteration: 1399. (total of 1400 actions, max excludes first 1397 actions)
saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
saving plot: 2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-12-17-08-24_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-12-11-00-59_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
Random Testing results:
*********** 100% trial success testing **********
{'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 1204, 'grasp_success_rate_best_value': 0.8595679012345679, 'grasp_success_rate_best_index': 1204, 'place_success_rate_best_value': 0.7612208258527827, 'place_success_rate_best_index': 1206, 'action_efficiency_best_value': 0.5083056478405316, 'action_efficiency_best_index': 1206}
*********** 100% trial success testing **********
Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Training results:
{'action_efficiency_best_index': 13181, 'action_efficiency_best_value': 0.66, 'grasp_success_rate_best_index': 13210, 'grasp_success_rate_best_value': 0.9233716475095786, 'place_success_rate_best_index': 10312, 'place_success_rate_best_value': 0.8626609442060086, 'trial_success_rate_best_index': 9059, 'trial_success_rate_best_value': 0.8928571428571429}
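The "Action: grasp at (2, 152, 120)" and "Executing: grasp at (-0.484000, 0.080000, ...) orientation: 0.785398" lines in the excerpt above show the conversion from a pixel-indexed action (rotation, row, column) to a robot pose. A minimal sketch of that conversion; the 16 rotations, 2 mm heightmap resolution, and workspace origin are assumptions inferred from the logged numbers rather than read from the code:

import math

NUM_ROTATIONS = 16             # assumed number of discrete gripper rotations
HEIGHTMAP_RESOLUTION = 0.002   # assumed meters per heightmap pixel
WORKSPACE_X_MIN = -0.724       # assumed sim workspace origin (meters)
WORKSPACE_Y_MIN = -0.224

def action_to_pose(rotation_idx, row, col):
    angle = rotation_idx * 2.0 * math.pi / NUM_ROTATIONS
    x = WORKSPACE_X_MIN + col * HEIGHTMAP_RESOLUTION
    y = WORKSPACE_Y_MIN + row * HEIGHTMAP_RESOLUTION
    return x, y, angle

print(action_to_pose(2, 152, 120))  # ~(-0.484, 0.080, 0.785398), matching the log above

The prev_height/max_z pair in the same excerpt (max_z 0.0511 m reported as stack_height 1.022) also suggests stack heights are expressed in block units, i.e. the maximum workspace z divided by a nominal ~0.05 m block height, though that constant is likewise an inference.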
SIM ROW - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-06-07
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
RUN CANCELED, POSSIBLE LABELING PROBLEM (not sure) - Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-50_Sim-Rows-SPOT-Trial-Reward-Masked-Training
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-09-15-18-36_Sim-Rows-SPOT-Trial-Reward-Masked-Training
Commit: a923d4d02f13998824a81fd53e8716f07ed8ba38 a couple commits after v0.16.3
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
> Testing results:
> '/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-09-15-18-36_Sim-Rows-SPOT-Trial-Reward-Masked-Training/2020-06-13-16-58-42_Sim-Rows-SPOT-Trial-Reward-Masked-Testing/best_stats.json'
> {"action_efficiency_best_index": 1989, "action_efficiency_best_value": 0.29894313034725717, "grasp_success_rate_best_index": 1987, "grasp_success_rate_best_value": 0.5405405405405406, "place_success_rate_best_index": null, "place_success_rate_best_value": -Infinity, "trial_success_rate_best_index": 1987, "trial_success_rate_best_value": 0.91}
==================================================================
==================================================================
Post "good robot!" paper
SIM STACK - SPOT STANDARD - TRIAL REWARD - Task Progress SPOT-Q MASKED - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - EFFICIENTNET 1 dilation - workstation named spot 2020-06-26
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --trial_reward --common_sense --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-28-13-08-38_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: b8798ea07167dff5c8fcf5cd11c3ace2b4a0e22d
GPU 0, Tab 0, port 19999, left center v-rep window, v-rep tab 7
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 1.279221 (push), 4.319913 (grasp), 7.368117 (place)
> Action: grasp at (7, 179, 92)
> Training loss: 1.625246
> Executing: grasp at (-0.540000, 0.134000, 0.000999) orientation: 2.748894
> gripper position: 0.05270123481750488
> gripper position: 0.034777820110321045
> gripper position: 0.03336215019226074
> Grasp successful: False
> prev_height: 0.0 max_z: 0.051148986974654656 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> prev_height: 1.0 max_z: 1.022979739493093 goal_success: False needed to reset: False max_workspace_height: 0.6 <<<<<<<<<<<
> check_stack() stack_height: 1.022979739493093 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 2.802325581395349 actions/full stack: 9.737373737373737 (lower is better) Grasp Count: 517, grasp success rate: 0.8433268858800773 place_on_stack_rate: 0.7889908256880734 place_attempts: 436 partial_stack_successes: 344 stack_successes: 99 trial_success_rate: 0.9801980198019802 stack goal: None current_height: 1.022979739493093
> trial_complete_indices: [ 8. 15. 26. 36. 43. 49. 57. 65. 73. 79. 83. 91. 112. 118.
> 134. 140. 151. 157. 163. 167. 176. 183. 203. 218. 224. 245. 253. 259.
> 268. 278. 363. 373. 379. 393. 399. 417. 423. 429. 441. 449. 456. 463.
> 473. 479. 486. 527. 533. 547. 553. 560. 566. 572. 578. 585. 591. 598.
> 604. 614. 620. 634. 640. 646. 654. 660. 677. 681. 687. 707. 713. 719.
> 725. 729. 741. 745. 753. 759. 763. 775. 781. 793. 812. 819. 823. 830.
> 842. 848. 854. 859. 874. 888. 894. 900. 906. 915. 919. 925. 931. 937.
> 953. 959. 963.]
> Max trial success rate: 0.97, at action iteration: 960. (total of 962 actions, max excludes first 960 actions)
> Max grasp success rate: 0.8466019417475729, at action iteration: 961. (total of 962 actions, max excludes first 960 actions)
> Max place success rate: 0.8389261744966443, at action iteration: 962. (total of 963 actions, max excludes first 960 actions)
> Max action efficiency: 0.61875, at action iteration: 962. (total of 963 actions, max excludes first 960 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-04-28-55_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-28-13-08-38_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-07-02-00-39-17_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.99, 'trial_success_rate_best_index': 912, 'grasp_success_rate_best_value': 0.8556910569105691, 'grasp_success_rate_best_index': 912, 'place_success_rate_best_value': 0.8598574821852731, 'place_success_rate_best_index': 912, 'action_efficiency_best_value': 0.6776315789473685, 'action_efficiency_best_index': 914}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-28-13-08-38_Sim-Stack-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.918918918918919, 'trial_success_rate_best_index': 18331, 'grasp_success_rate_best_value': 0.9192307692307692, 'grasp_success_rate_best_index': 12025, 'place_success_rate_best_value': 0.9361702127659575, 'place_success_rate_best_index': 16201, 'action_efficiency_best_value': 0.876, 'action_efficiency_best_index': 15505}
ANY OBJECT SIM STACK - Trial Reward SPOT-Q MASKED - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - EFFICIENTNET 1 dilation - workstation named costar 2020-06-28
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --common_sense --trial_reward --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-18-08-20_Sim-Stack-SPOT-Trial-Reward-Masked-Training
SIM FAILURE 2: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-30-11-12-07_Sim-Stack-SPOT-Trial-Reward-Masked-Training
MAJOR SIMULATOR FAILURE, CANCELLED AND RESTARTED WITH SESSION ABOVE: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-28-13-03-27_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: b8798ea07167dff5c8fcf5cd11c3ace2b4a0e22d
GPU 1, Tab 1, port 19998, right center v-rep window, v-rep tab 8
SIM STACK - SPOT STANDARD - TRIAL REWARD - Task Progress SPOT-Q MASKED - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - EFFICIENTNET 1 dilation - workstation named spot 2020-07-02
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --trial_reward --common_sense --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-18-06-30_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: b8798ea07167dff5c8fcf5cd11c3ace2b4a0e22d
GPU 0, Tab 0, port 19999, left center v-rep window, v-rep tab 7
> Trial logging complete: 101 --------------------------------------------------------------
> Running two step backprop()
> Primitive confidence scores: 1.402481 (push), 7.382995 (grasp), 8.391286 (place)
> Action: grasp at (2, 68, 123)
> Training loss: 0.443942
> Executing: grasp at (-0.478000, -0.088000, 0.001001) orientation: 0.785398
> gripper position: 0.02947103977203369
> gripper position: 0.0259285569190979
> gripper position: 0.0009321868419647217
> gripper position: -0.023242294788360596
> gripper position: -0.041788965463638306
> Grasp successful: False
> prev_height: 0.0 max_z: 0.05113309271598533 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> prev_height: 1.0 max_z: 1.0226618543197066 goal_success: False needed to reset: False max_workspace_height: 0.6 <<<<<<<<<<<
> check_stack() stack_height: 1.0226618543197066 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False
> STACK: trial: 101 actions/partial: 2.974285714285714 actions/full stack: 10.515151515151516 (lower is better) Grasp Count: 561, grasp success rate: 0.857397504456328 place_on_stack_rate: 0.7291666666666666 place_attempts: 480 partial_stack_successes: 350 stack_successes: 99 trial_success_rate: 0.9801980198019802 stack goal: None current_height: 1.0226618543197066
> trial_complete_indices: [ 15. 26. 57. 65. 71. 77. 85. 96. 102. 111. 124. 130.
> 139. 147. 151. 162. 168. 181. 189. 196. 200. 246. 258. 277.
> 285. 299. 307. 313. 320. 331. 338. 344. 358. 366. 376. 385.
> 391. 408. 418. 424. 430. 447. 463. 471. 481. 493. 503. 509.
> 518. 525. 531. 540. 547. 555. 562. 573. 579. 589. 595. 605.
> 611. 617. 627. 639. 649. 655. 667. 683. 689. 696. 719. 727.
> 771. 783. 812. 816. 827. 841. 848. 869. 877. 883. 889. 895.
> 906. 912. 921. 928. 932. 946. 961. 969. 976. 982. 988. 994.
> 1006. 1017. 1026. 1032. 1040.]
> Max trial success rate: 0.98, at action iteration: 1037. (total of 1039 actions, max excludes first 1037 actions)
> Max grasp success rate: 0.8586762075134168, at action iteration: 1037. (total of 1039 actions, max excludes first 1037 actions)
> Max place success rate: 0.824634655532359, at action iteration: 1037. (total of 1040 actions, max excludes first 1037 actions)
> Max action efficiency: 0.5901639344262295, at action iteration: 1039. (total of 1040 actions, max excludes first 1037 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-06-10-12-36_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-18-06-30_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-07-06-06-25-21_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.98, 'trial_success_rate_best_index': 958, 'grasp_success_rate_best_value': 0.6551126516464472, 'grasp_success_rate_best_index': 958, 'place_success_rate_best_value': 0.8900523560209425, 'place_success_rate_best_index': 958, 'action_efficiency_best_value': 0.6200417536534447, 'action_efficiency_best_index': 960}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-07-02-18-06-30_Sim-Stack-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.8985507246376812, 'trial_success_rate_best_index': 18573, 'grasp_success_rate_best_value': 0.9195402298850575, 'grasp_success_rate_best_index': 14730, 'place_success_rate_best_value': 0.9240506329113924, 'place_success_rate_best_index': 14597, 'action_efficiency_best_value': 0.912, 'action_efficiency_best_index': 11041}
===================================================================
TODO:
STACKING DENSENET NO COMMON SENSE, NO TRIAL REWARD
-------------------------------------------------------------
GPU 1, port 19998
export CUDA_VISIBLE_DEVICES="0" && python main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65
SIM TO REAL TESTING Pushing and Grasping
========================================
Testing Efficientnet model: https://github.com/jhu-lcsr/good_robot/releases/tag/push_grasp_v0.3.2
export CUDA_VISIBLE_DEVICES="0" && python main.py --push_rewards --experience_replay --explore_rate_decay --trial_reward --save_visualizations --common_sense --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --nn efficientnet --snapshot_file '/home/costar/Downloads/snapshot.reinforcement.pth'
saving plot: 2020-02-23-22-16-15_Real-Rows-SPOT-Trial-Reward-Common-Sense-Testing-Real-Rows-SPOT-Trial-Reward-Common-Sense-Testing_success_plot.png
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-23-22-16-15_Real-Rows-SPOT-Trial-Reward-Common-Sense-Testing/data/best_stats.json
saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-23-22-16-15_Real-Rows-SPOT-Trial-Reward-Common-Sense-Testing/best_stats.json
Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-02-23-22-16-15_Real-Rows-SPOT-Trial-Reward-Common-Sense-Testing
Training results:
{'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 67, 'grasp_success_rate_best_value': 0.6829268292682927, 'grasp_success_rate_best_index': 68, 'place_success_rate_best_value': 0.8888888888888888, 'place_success_rate_best_index': 68, 'action_efficiency_best_value': 1.0746268656716418, 'action_efficiency_best_index': 69}
=====================================================================
2020-12-04
Test 3 time step depth history
TEST DEPTH CHANNEL HISTORY - SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-12-04
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history
RESUME: ± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history --resume '/home/costar/src/real_good_robot/logs/2020-12-04-18-19-35_Sim-Stack-SPOT-Trial-Reward-Masked-Training'
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-04-18-19-35_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: fb2014ad4738c8843acc88bb06241109a9042338 release tag: depth_channel_history_test_2020_12_05
GPU 0, Tab 2, port 19990, left v-rep window, v-rep tab 7
Here we are running an initial test with 3 time step depth history in the depth image channels. However, this version does not clear the older depth channels on an environment reset.
costar@costar-desktop|~/src/real_good_robot on history_devel!
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history
Connected to simulation.
sim started 1: 0 sim started 2: 0
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/5.obj as shape_00
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/7.obj as shape_01
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/1.obj as shape_02
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/6.obj as shape_03
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/8.obj as shape_04
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/3.obj as shape_05
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/2.obj as shape_06
Adding object: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/objects/blocks/7.obj as shape_07
CUDA detected. Running with GPU acceleration.
/home/costar/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
Training stats after 7k actions:
Training Complete! Dir: /home/costar/src/real_good_robot/logs/2020-12-04-18-19-35_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Choosing a snapshot from the following options:{'trial_success_rate_best_value': 0.8085106382978723, 'trial_success_rate_best_index': 6178, 'grasp_success_rate_best_value': 0.7992700729927007, 'grasp_success_rate_best_index': 5236, 'place_success_rate_best_value': 0.8177570093457944, 'place_success_rate_best_index': 6437, 'action_efficiency_best_value': 0.492, 'action_efficiency_best_index': 6782}
TEST NO DEPTH CHANNEL HISTORY - SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-12-04
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-04-18-45-21_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: fb2014ad4738c8843acc88bb06241109a9042338 release tag: depth_channel_history_test_2020_12_05
GPU 1, Tab 3, port 19998, left v-rep window, v-rep tab 7
± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions
Connected to simulation.
CUDA detected. Running with GPU acceleration.
/home/costar/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-04-18-45-21_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Notes: This run seems to progress normally; we stopped it after 8k actions. We were only testing for serious new bugs here, and it looks OK.
Training iteration: 8112
prev_height: 0.0 max_z: 0.1549978932002351 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
Current count of pixels with stuff: 9558.0 threshold below which the scene is considered empty: 300
Change detected: True (value: 650)
Primitive confidence scores: 1.382265 (push), 2.372996 (grasp), 2.777307 (place)
Strategy: exploit (exploration probability: 0.019483)
Action: grasp at (3, 200, 104)
Executing: grasp at (-0.516000, 0.176000, 0.050989) orientation: 1.178097
Trainer.get_label_value(): Current reward: 3.099157 Current reward multiplier: 3.099157 Predicted Future reward: 2.777307 Expected reward: 3.099157 + 0.650000 x 2.777307 = 4.904406
Running two step backprop()
Training loss: 1.389425
gripper position: 0.03529293090105057
gripper position: 0.026784205809235573
gripper position: 0.00438448041677475
Experience replay 21125: history timestep index 7434, action: grasp, surprise value: 1.534599
Training loss: 0.337143
gripper position: 0.0035619139671325684
gripper position: 0.0034254491329193115
Grasp successful: True
prev_height: 0.0 max_z: 0.1549610677813515 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
prev_height: 3.099156804111601 max_z: 3.09922135562703 goal_success: False needed to reset: False max_workspace_height: 2.699156804111601 <<<<<<<<<<<
check_stack() stack_height: 3.09922135562703 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False
STACK: trial: 721 actions/partial: 5.599033816425121 actions/full stack: 21.23821989528796 (lower is better) Grasp Count: 4683, grasp success rate: 0.6638906683749733 place_on_stack_rate: 0.46621621621621623 place_attempts: 3108 partial_stack_successes: 1449 stack_successes: 382 trial_success_rate: 0.5298196948682385 stack goal: None current_height: 3.09922135562703
Experience replay 21126: history timestep index 6923, action: grasp, surprise value: 1.276310
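The Trainer.get_label_value() line above spells out how the backprop target is formed from the current reward and the discounted best predicted future reward, using the --future_reward_discount 0.65 flag from the command. As a worked check of that arithmetic:

current_reward = 3.099157
future_reward_discount = 0.65          # from --future_reward_discount 0.65
predicted_future_reward = 2.777307     # the best primitive confidence score above (place)
expected_reward = current_reward + future_reward_discount * predicted_future_reward
print(expected_reward)                 # ~4.904406, matching the logged expected reward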
DEPTH CHANNEL HISTORY, CLEAR CHANNELS ON RESET - SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-12-05
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-06-11-55-47_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: 8e2305a1084ad8316cd5285353f0f4c88a9a8551 release tag:
GPU 1, Tab 3, port 19998, left v-rep window, v-rep tab 7
Here we are running an initial test with 3 time step depth history in the depth image channels. This version does clear the older depth channels on an environment reset.
DEPTH CHANNEL HISTORY, CLEAR CHANNELS ON RESET - SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-12-05
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history
RESUME: export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history --resume '/home/costar/src/real_good_robot/logs/2020-12-10-22-40-29_Sim-Stack-SPOT-Trial-Reward-Masked-Training'
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-10-22-40-29_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: 1498ad121868dbcdb8b03ded7340ef04dbd52899 release tag:
GPU 1, Tab 3, port 19998, left v-rep window, v-rep tab 7
Note: the test has to be run again; it crashed after 53 trials.
Here we are running an updated test with 3 time step depth history bugfixes in the depth image channels. This version does clear the older depth channels on an environment reset.
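These depth-history runs feed the last 3 depth heightmaps to the network as input channels; the key difference from the initial 2020-12-04 run is that the buffer is now cleared when the environment resets. A minimal sketch of one way such a rolling buffer can work (names and shapes are illustrative, not the project's actual code):

from collections import deque
import numpy as np

class DepthHistory:
    def __init__(self, history_len=3):
        self.history_len = history_len
        self.frames = deque(maxlen=history_len)

    def reset(self, depth_heightmap):
        # clear stale frames so a new trial never sees the previous scene
        self.frames.clear()
        self.frames.extend([depth_heightmap.copy() for _ in range(self.history_len)])

    def push(self, depth_heightmap):
        if not self.frames:
            self.reset(depth_heightmap)
        else:
            self.frames.append(depth_heightmap)  # maxlen drops the oldest frame
        # stack the history along the channel axis: (history_len, H, W)
        return np.stack(self.frames, axis=0)

The 2020-12-04 run skipped the reset() step, so depth frames recorded before a reset leaked into the first inputs of the next trial; the 2020-12-05 and later runs add that clearing step.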
COLLECTING TEST STACK RESULTS WITH OBJECT POSITIONS, 200 test trials (on spot workstation) for language model training:
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --depth_channels_history --is_testing --disable_situation_removal --save_visualizations --stack_snapshot_file '/home/ahundt/Downloads/2020-12-10-22-40-29_Sim-Stack-SPOT-Trial-Reward-Masked-Training-Sim-Stack-SPOT-Trial-Reward-Masked/snapshot.reinforcement_trial_success_rate_best_value.pth' --max_test_trials 200
Creating data logging session: /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History
Test Commit: 5721c9e8fde6d26759d7663cb972e0802fb84207
GPU 1, Tab 1, port 19998, top right v-rep window, v-rep tab 7
Data in:
2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History-poor-efficiency-8-colors.zip
Note: the action efficiency of this model is fairly low, at 36-41%. We may want to retrain another model to get closer to the best possible value of 60%+.
> STACK: trial: 101 actions/partial: 4.984848484848484 actions/full stack: 16.45 (lower is better) Grasp Count: 908, grasp success rate: 0.7995594713656388 place_on_stack_rate: 0.45454545454545453 place_attempts: 726 partial_stack_successes: 330 stack_successes: 100 trial_success_rate: 0.9900990099009901 stack goal: None current_height: 1.022614398164782
> trial_complete_indices: [ 19. 29. 35. 41. 51. 57. 69. 78. 84. 90. 100. 110.
> 120. 127. 133. 172. 180. 206. 226. 251. 259. 277. 283. 289.
> 319. 339. 391. 408. 416. 447. 455. 463. 492. 512. 522. 528.
> 552. 558. 565. 571. 590. 598. 605. 611. 615. 626. 637. 660.
> 671. 679. 685. 691. 701. 718. 964. 973. 979. 1007. 1016. 1024.
> 1042. 1049. 1056. 1072. 1080. 1100. 1106. 1115. 1123. 1144. 1154. 1160.
> 1162. 1169. 1185. 1189. 1197. 1219. 1235. 1244. 1264. 1276. 1282. 1293.
> 1301. 1314. 1378. 1403. 1414. 1445. 1467. 1481. 1506. 1525. 1532. 1560.
> 1568. 1574. 1579. 1635. 1644.]
> Max trial success rate: 0.99, at action iteration: 1641. (total of 1643 actions, max excludes first 1641 actions)
> Max grasp success rate: 0.8002207505518764, at action iteration: 1641. (total of 1643 actions, max excludes first 1641 actions)
> Max place success rate: 0.6752717391304348, at action iteration: 1641. (total of 1644 actions, max excludes first 1641 actions)
> Max action efficiency: 0.3692870201096892, at action iteration: 1643. (total of 1644 actions, max excludes first 1641 actions)
> saving trial success rate: /home/ahundt/src/real_good_robot/logs/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/transitions/trial-success-rate.log.csv
> saving grasp success rate: /home/ahundt/src/real_good_robot/logs/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/transitions/grasp-success-rate.log.csv
> saving place success rate: /home/ahundt/src/real_good_robot/logs/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/transitions/place-success-rate.log.csv
> saving action efficiency: /home/ahundt/src/real_good_robot/logs/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/transitions/action-efficiency.log.csv
> saving plot: 2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History-Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History_success_plot.png
> saving best stats to: /home/ahundt/src/real_good_robot/logs/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/data/best_stats.json
> saving best stats to: /home/ahundt/src/real_good_robot/logs/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/best_stats.json
> Choosing a snapshot from the following options:{'trial_success_rate_best_value': 0.97, 'trial_success_rate_best_index': 3219, 'grasp_success_rate_best_value': 0.7554466230936819, 'grasp_success_rate_best_index': 3219, 'place_success_rate_best_value': 0.7312138728323699, 'place_success_rate_best_index': 3219, 'action_efficiency_best_value': 0.4193849021435228, 'action_efficiency_best_index': 3221}
> Evaluating trial_success_rate_best_value
> /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/models/snapshot.reinforcement_trial_success_rate_best_value.pth does not exist, looking
> for other options.
> /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/models/snapshot.reinforcement_grasp_success_rate_best_value.pth does not exist, looking
> for other options.
> Could not find any best-of models, checking for the basic training models.
> /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/models/snapshot.reinforcement.pth does not exist, looking for other options.
> /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/models/snapshot.reactive.pth does not exist, looking for other options.
> Shapshot chosen:
> Random Testing Complete! Dir: /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History/2021-02-14-04-49-09_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History
> Random Testing results:
> {'trial_success_rate_best_value': 0.99, 'trial_success_rate_best_index': 1641, 'grasp_success_rate_best_value': 0.8002207505518764, 'grasp_success_rate_best_index': 1641, 'place_success_rate_best_value': 0.6752717391304348, 'place_success_rate_best_index': 1641, 'action_efficiency_best_value': 0.3692870201096892, 'action_efficiency_best_index': 1643}
> Training Complete! Dir: /home/ahundt/src/real_good_robot/logs/2021-02-13-16-30-37_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Three-Step-History
> Training results:
> {'trial_success_rate_best_value': 0.97, 'trial_success_rate_best_index': 3219, 'grasp_success_rate_best_value': 0.7554466230936819, 'grasp_success_rate_best_index': 3219, 'place_success_rate_best_value': 0.7312138728323699, 'place_success_rate_best_index': 3219, 'action_efficiency_best_value': 0.4193849021435228, 'action_efficiency_best_index': 3221}
DEPTH CHANNEL HISTORY, PUSHING AND GRASPING WITH ALL FEATURES & SAVE ALL MODELS ACCORDING TO BEST STATS - costar 2020-12-27
--------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --common_sense --trial_reward --save_visualizations --future_reward_discount 0.65 --tcp_port 19990 --random_actions --depth_channels_history
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-27-14-25-51_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training
Commit: 8742cbea2e53707fac32020c0f04b23b479575b5
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
DEPTH CHANNEL HISTORY, SIM ROW - Task Progress SPOT-Q MASKED - REWARD SCHEDULE 0.1, 1, 1 - costar 2020-12-27
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --common_sense --depth_channels_history
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-27-15-03-09_Sim-Rows-Two-Step-Reward-Masked-Training
Commit: 8742cbea2e53707fac32020c0f04b23b479575b5
GPU 1, Tab 1, port 19998, left v-rep window, v-rep tab 8
20k action run, need to run longer for better convergence
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-01-01-20-07_Sim-Rows-Two-Step-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-01-01-20-07_Sim-Rows-Two-Step-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-27-15-03-09_Sim-Rows-Two-Step-Reward-Masked-Training/2021-01-01-01-20-07_Sim-Rows-Two-Step-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.95, 'trial_success_rate_best_index': 924, 'grasp_success_rate_best_value': 0.8562874251497006, 'grasp_success_rate_best_index': 924, 'place_success_rate_best_value': 0.7759433962264151, 'place_success_rate_best_index': 924, 'action_efficiency_best_value': 0.6298701298701299, 'action_efficiency_best_index': 924}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-27-15-03-09_Sim-Rows-Two-Step-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.7012987012987013, 'trial_success_rate_best_index': 15709, 'grasp_success_rate_best_value': 0.9049429657794676, 'grasp_success_rate_best_index': 9412, 'place_success_rate_best_value': 0.8387096774193549, 'place_success_rate_best_index': 19712, 'action_efficiency_best_value': 1.02, 'action_efficiency_best_index': 19459}
40k action run, reasonable expected convergence
> trial_complete_indices: [ 4. 12. 26. 32. 38. 42. 48. 62. 66. 74. 78. 80. 82. 86.
> 94. 98. 105. 112. 118. 130. 132. 137. 139. 144. 148. 150. 154. 158.
> 164. 170. 174. 178. 182. 186. 191. 195. 234. 242. 248. 253. 257. 259.
> 263. 268. 272. 276. 280. 286. 290. 292. 298. 302. 306. 310. 314. 318.
> 322. 326. 330. 334. 340. 344. 346. 350. 359. 361. 365. 369. 375. 379.
> 383. 387. 396. 401. 405. 411. 413. 418. 420. 426. 430. 436. 440. 444.
> 450. 454. 456. 460. 464. 468. 472. 478. 482. 489. 493. 497. 503. 507.
> 513. 517. 521.]
> Max trial success rate: 0.98, at action iteration: 518. (total of 520 actions, max excludes first 518 actions)
> Max grasp success rate: 0.9117647058823529, at action iteration: 518. (total of 520 actions, max excludes first 518 actions)
> Max place success rate: 0.8704453441295547, at action iteration: 518. (total of 521 actions, max excludes first 518 actions)
> Max action efficiency: 1.1583011583011582, at action iteration: 520. (total of 521 actions, max excludes first 518 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing-Sim-Rows-Two-Step-Reward-Masked-Testing_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:425: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:427: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-27-15-03-09_Sim-Rows-Two-Step-Reward-Masked-Training/2021-01-08-19-05-03_Sim-Rows-Two-Step-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.98, 'trial_success_rate_best_index': 518, 'grasp_success_rate_best_value': 0.9117647058823529, 'grasp_success_rate_best_index': 518, 'place_success_rate_best_value': 0.8704453441295547, 'place_success_rate_best_index': 518, 'action_efficiency_best_value': 1.1583011583011582, 'action_efficiency_best_index': 520}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-12-27-15-03-09_Sim-Rows-Two-Step-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.7950819672131147, 'trial_success_rate_best_index': 28882, 'grasp_success_rate_best_value': 0.9049429657794676, 'grasp_success_rate_best_index': 9412, 'place_success_rate_best_value': 0.8954545454545455, 'place_success_rate_best_index': 38007, 'action_efficiency_best_value': 1.284, 'action_efficiency_best_index': 38033}
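The MatplotlibDeprecationWarning in the log above comes from plot.py passing optimize=True directly to savefig, which Matplotlib 3.3 deprecated. A minimal sketch of one possible fix (an assumption, not the committed change) is to forward the Pillow option through pil_kwargs, or simply drop it:

    # Possible fix for the plot.py savefig deprecation warning (sketch only):
    # pass Pillow-specific options via pil_kwargs instead of bare kwargs.
    import matplotlib.pyplot as plt

    def save_success_plot(path, dpi=300):
        # 'optimize' is a Pillow PNG option, so forward it explicitly.
        plt.savefig(path, dpi=dpi, pil_kwargs={'optimize': True})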
COLLECTING TEST ROW RESULTS WITH OBJECT POSITIONS, 200 test trials (on spot workstation) for language model training:
export CUDA_VISIBLE_DEVICES="2" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --check_row --tcp_port 19999 --place --future_reward_discount 0.65 --max_train_actions 40000 --random_actions --common_sense --depth_channels_history --is_testing --disable_situation_removal --save_visualizations --row_snapshot_file '/home/ahundt/Downloads/2020-12-27-15-03-09-sim-rows-progress-40k-densenet-model-with-history/2020-12-27-15-03-09-sim-rows-progress-40k/snapshot.reinforcement_action_efficiency_best_value.pth' --max_test_trials 200
Pre-trained model snapshot loaded from: /home/ahundt/Downloads/2020-12-27-15-03-09-sim-rows-progress-40k-densenet-model-with-history/2020-12-27-15-03-09-sim-rows-progress-40k/snapshot.reinforcement_action_efficiency_best_value.pth
Creating data logging session: /home/ahundt/src/real_good_robot/logs/2021-02-13-16-43-06_Sim-Rows-Two-Step-Reward-Masked-Testing-Three-Step-History
DEPTH CHANNEL HISTORY, EFFICIENTNET, PUSHING AND GRASPING WITH ALL FEATURES & SAVE ALL MODELS ACCORDING TO BEST STATS - costar 2021-01-03
--------------------------------------
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --common_sense --trial_reward --save_visualizations --future_reward_discount 0.65 --tcp_port 19990 --random_actions --depth_channels_history --max_train_actions 20000 --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-03-19-07-05_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training
Commit: 8742cbea2e53707fac32020c0f04b23b479575b5
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab
This run may have hit a bad step; it doesn't match past EfficientNet runs and may need to be re-run. There was also a bug in the current progress logging.
> Max trial success rate: 1.0, at action iteration: 1004. (total of 1006 actions, max excludes first 1004 actions)
> max trial successes: 110.0
> individual_arrangement_trial_success_rate: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0.9]
> senarios_100_percent_complete: 10
> Max grasp success rate: 0.5173116089613035, at action iteration: 1004. (total of 1006 actions, max excludes first 1004 actions)
> Max grasp action efficiency: 0.5059760956175299, at action iteration: 1004. (total of 1007 actions, max excludes first 1004 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements/transitions/grasp-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements/transitions/action-efficiency.log.csv
> saving plot: 2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements-Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:425: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:427: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements/best_stats.json
> Challenging Arrangements Preset Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-03-19-07-05_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements
> Challenging Arrangements Preset Testing results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 1004, 'senarios_100_percent_complete': 10, 'grasp_success_rate_best_value': 0.5173116089613035, 'grasp_success_rate_best_index': 1004, 'grasp_action_efficiency_best_value': 0.5059760956175299, 'grasp_action_efficiency_best_index': 1004}
> Choosing a snapshot from the following options:{'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 696, 'grasp_success_rate_best_value': 0.9421157684630739, 'grasp_success_rate_best_index': 19787, 'grasp_action_efficiency_best_value': 0.944, 'grasp_action_efficiency_best_index': 19787}
> Evaluating trial_success_rate_best_value
> The trial_success_rate_best_value is fantastic at 1.0, so we will look for the best grasp_success_rate_best_value.
> Shapshot chosen: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-03-19-07-05_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training/models/snapshot.reinforcement_grasp_success_rate_best_value.pth
> Challenging Arrangements Preset Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-03-19-07-05_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training/2021-01-07-20-38-29_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Challenging-Arrangements
> Challenging Arrangements Preset Testing results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 1004, 'senarios_100_percent_complete': 10, 'grasp_success_rate_best_value': 0.5173116089613035, 'grasp_success_rate_best_index': 1004, 'grasp_action_efficiency_best_value': 0.5059760956175299, 'grasp_action_efficiency_best_index': 1004}
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-03-19-07-05_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training/2021-01-07-15-42-45_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 1208, 'grasp_success_rate_best_value': 0.8038397328881469, 'grasp_success_rate_best_index': 1208, 'grasp_action_efficiency_best_value': 0.7971854304635762, 'grasp_action_efficiency_best_index': 1208}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-03-19-07-05_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 696, 'grasp_success_rate_best_value': 0.9421157684630739, 'grasp_success_rate_best_index': 19787, 'grasp_action_efficiency_best_value': 0.944, 'grasp_action_efficiency_best_index': 19787}
>
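For reference, a minimal sketch of the snapshot-selection heuristic printed in the log above (hypothetical helper and dictionary keys modeled on the logged stats, not the project's code): take the snapshot with the best trial success rate, and when that value is already 1.0, fall back to the snapshot with the best grasp success rate.

    # Sketch of the snapshot-selection heuristic (hypothetical helper):
    # prefer the best trial success rate snapshot; if trial success is
    # already perfect, use the best grasp success rate snapshot instead.
    def choose_snapshot(best_stats, snapshot_paths):
        if best_stats.get('trial_success_rate_best_value', 0.0) >= 1.0:
            return snapshot_paths.get('grasp_success_rate_best_value',
                                      snapshot_paths.get('trial_success_rate_best_value'))
        return snapshot_paths.get('trial_success_rate_best_value')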
DEPTH CHANNEL HISTORY, EFFICIENTNET - SIM ROW - Task Progress SPOT-Q MASKED - REWARD SCHEDULE 0.1, 1, 1 - costar 2021-01-09
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --save_visualizations --check_row --tcp_port 19998 --place --future_reward_discount 0.65 --max_train_actions 40000 --random_actions --common_sense --depth_channels_history --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-09-11-53-31_Sim-Rows-Two-Step-Reward-Masked-Training
Commit: 069aa7d5a9f6e29d8b825607080c39a84c363aa4
GPU 1, Tab 1, port 19998, left v-rep window, v-rep tab 8
> Max trial success rate: 0.94, at action iteration: 794. (total of 796 actions, max excludes first 794 actions)
> Max grasp success rate: 0.8561484918793504, at action iteration: 794. (total of 796 actions, max excludes first 794 actions)
> Max place success rate: 0.695054945054945, at action iteration: 796. (total of 797 actions, max excludes first 794 actions)
> Max action efficiency: 0.7178841309823678, at action iteration: 796. (total of 797 actions, max excludes first 794 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing-Sim-Rows-Two-Step-Reward-Masked-Testing_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:425: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:427: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-18-07-52-00_Sim-Rows-Two-Step-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-09-11-53-31_Sim-Rows-Two-Step-Reward-Masked-Training/2021-01-18-04-31-55_Sim-Rows-Two-Step-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.95, 'trial_success_rate_best_index': 881, 'grasp_success_rate_best_value': 0.9136069114470843, 'grasp_success_rate_best_index': 881, 'place_success_rate_best_value': 0.6706443914081146, 'place_success_rate_best_index': 881, 'action_efficiency_best_value': 0.674233825198638, 'action_efficiency_best_index': 883}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-09-11-53-31_Sim-Rows-Two-Step-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.75, 'trial_success_rate_best_index': 38204, 'grasp_success_rate_best_value': 0.8587360594795539, 'grasp_success_rate_best_index': 30979, 'place_success_rate_best_value': 0.8054298642533937, 'place_success_rate_best_index': 39279, 'action_efficiency_best_value': 0.972, 'action_efficiency_best_index': 38122}
DEPTH CHANNEL HISTORY, EFFICIENTNET - SIM STACK - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - TRIAL REWARD - FULL FEATURED RUN - SORT TRIAL REWARD - REWARD SCHEDULE 0.1, 1, 1 - costar 2021-01-09
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --trial_reward --common_sense --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 40000 --random_actions --depth_channels_history --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-09-12-02-05_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: 069aa7d5a9f6e29d8b825607080c39a84c363aa4 release tag:
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> Max trial success rate: 0.98, at action iteration: 853. (total of 855 actions, max excludes first 853 actions)
> Max grasp success rate: 0.9116279069767442, at action iteration: 853. (total of 855 actions, max excludes first 853 actions)
> Max place success rate: 0.8466981132075472, at action iteration: 853. (total of 856 actions, max excludes first 853 actions)
> Max action efficiency: 0.6963657678780774, at action iteration: 855. (total of 856 actions, max excludes first 853 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing-Sim-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:425: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:427: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-17-15-03-35_Sim-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-09-12-02-05_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2021-01-17-10-51-39_Sim-Stack-SPOT-Trial-Reward-Masked-Testing
> Random Testing results:
> {'trial_success_rate_best_value': 0.98, 'trial_success_rate_best_index': 1015, 'grasp_success_rate_best_value': 0.731239092495637, 'grasp_success_rate_best_index': 1015, 'place_success_rate_best_value': 0.8397291196388262, 'place_success_rate_best_index': 1015, 'action_efficiency_best_value': 0.5852216748768473, 'action_efficiency_best_index': 1017}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-09-12-02-05_Sim-Stack-SPOT-Trial-Reward-Masked-Training
> Training results:
> {'trial_success_rate_best_value': 0.9459459459459459, 'trial_success_rate_best_index': 29701, 'grasp_success_rate_best_value': 0.9609375, 'grasp_success_rate_best_index': 30730, 'place_success_rate_best_value': 0.9401709401709402, 'place_success_rate_best_index': 28296, 'action_efficiency_best_value': 1.752, 'action_efficiency_best_index': 12895}
DEPTH CHANNEL HISTORY, EFFICIENTNET, PUSHING AND GRASPING WITH ALL FEATURES & SAVE ALL MODELS ACCORDING TO BEST STATS - costar 2021-01-21
--------------------------------------
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/toys --num_obj 10 --push_rewards --experience_replay --explore_rate_decay --common_sense --trial_reward --save_visualizations --future_reward_discount 0.65 --tcp_port 19990 --random_actions --depth_channels_history --max_train_actions 20000 --nn efficientnet --num_dilation 1
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-01-21-15-59-44_Sim-Push-and-Grasp-SPOT-Trial-Reward-Masked-Training
Commit: 0094dce46d68927f57869f7ba0a75122c5423a64
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab
DEPTH CHANNEL HISTORY, DENSENET - SIM 2x2 VERTICAL SQUARE - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - PROGRESS REWARD - FULL FEATURED RUN - REWARD SCHEDULE 0.1, 1, 1 - costar 2021-02-08
----------------------------------------------------------------------------------------
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19990 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type vertical_square --depth_channels_history
CANCELLED due to place mask bug: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-12-47-05_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
CANCELLED due to place mask bug: Commit: d7a2e679edfdae0a0a7534fae0bb4ac1397f77ca
CANCELLED Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-15-19-21_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
Commit: a98b11806fcd3e007b13daae66ed343badc9a94f
# we found a bug
CANCELLED RESUME: Commit: f67622f4d3ed5dbe24e4ff74646ccf425e8a71e5
CANCELLED RESUME: ± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19990 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type vertical_square --depth_channels_history --resume '/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-15-19-21_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History'
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-22-31-35_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
Commit: 1f61699ac1a29f00fcfd787fe5e4ce39b0fb46d0
GPU 0, Tab 0, port 19990, left v-rep window, v-rep tab 7
> STACK: trial: 101 actions/partial: 4.437246963562753 actions/full stack: 12.744186046511627 (lower is better) Grasp Count: 584, grasp success rate: 0.8801369863013698 place_on_stack_rate: 0.482421875 place_attempts: 512 partial_stack_successes: 247 stack_successes: 86 trial_success_rate: 0.8514851485148515 stack goal: [3 0] current_height: 2
> trial_complete_indices: [ 4. 10. 14. 23. 27. 49. 57. 87. 95. 99. 106. 114.
> 120. 134. 138. 157. 161. 171. 177. 183. 213. 243. 247. 251.
> 255. 270. 280. 310. 326. 330. 334. 338. 354. 363. 369. 391.
> 398. 412. 442. 451. 474. 480. 510. 538. 542. 569. 573. 603.
> 611. 620. 624. 629. 635. 644. 648. 672. 676. 681. 687. 691.
> 695. 711. 719. 728. 732. 738. 742. 746. 760. 774. 804. 820.
> 826. 848. 854. 860. 868. 873. 881. 887. 893. 899. 907. 932.
> 936. 952. 956. 969. 998. 1005. 1009. 1013. 1018. 1024. 1030. 1034.
> 1052. 1058. 1069. 1091. 1095.]
> Max trial success rate: 0.85, at action iteration: 1092. (total of 1094 actions, max excludes first 1092 actions)
> Max grasp success rate: 0.8814432989690721, at action iteration: 1092. (total of 1094 actions, max excludes first 1092 actions)
> Max place success rate: 0.45401174168297453, at action iteration: 1092. (total of 1095 actions, max excludes first 1092 actions)
> Max action efficiency: 0.4725274725274725, at action iteration: 1092. (total of 1095 actions, max excludes first 1092 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History/transitions/action-efficiency.log.csv
> saving plot: 2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History-Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:438: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:440: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History/best_stats.json
> Random Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-22-31-35_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History/2021-02-20-01-25-00_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History
> Random Testing results:
> {'trial_success_rate_best_value': 0.85, 'trial_success_rate_best_index': 1092, 'grasp_success_rate_best_value': 0.8814432989690721, 'grasp_success_rate_best_index': 1092, 'place_success_rate_best_value': 0.45401174168297453, 'place_success_rate_best_index': 1092, 'action_efficiency_best_value': 0.4725274725274725, 'action_efficiency_best_index': 1092}
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-22-31-35_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
> Training results:
> {'action_efficiency_best_index': 16788, 'action_efficiency_best_value': 0.612, 'grasp_success_rate_best_index': 38828, 'grasp_success_rate_best_value': 0.908745247148289, 'place_success_rate_best_index': 16829, 'place_success_rate_best_value': 0.5770925110132159, 'trial_success_rate_best_index': 28806, 'trial_success_rate_best_value': 0.7916666666666666}
VERTICAL SQUARE 2021-02-09-22-31-35 BEST TEST RUN: {"ablation": null, "check_row": false, "check_z_height": false, "check_z_height_goal": 4.0, "check_z_height_max": 6.0, "common_sense": true, "demo_path": null, "depth_channels_history": true, "disable_situation_removal": false, "disable_two_step_backprop": false, "discounted_reward": false, "evaluate_random_objects": false, "experience_replay": true, "explore_rate_decay": true, "flops": false, "force_cpu": false, "future_reward_discount": 0.65, "grasp_color_task": false, "grasp_only": false, "heightmap_resolution": 0.002, "heuristic_bootstrap": false, "is_sim": true, "is_testing": true, "max_iter": -1, "max_test_trials": 100, "max_train_actions": 40000, "method": "reinforcement", "nn": "densenet", "no_common_sense_backprop": false, "no_height_reward": false, "num_dilation": 0, "num_extra_obj": 0, "num_obj": 4, "obj_mesh_dir": "objects/blocks", "place": true, "plot_window": 500, "push_rewards": true, "random_actions": true, "random_seed": 1238, "random_trunk_weights_max": 0, "random_trunk_weights_min_success": 4, "random_trunk_weights_reset_iters": 0, "random_weights": false, "resume": null, "row_snapshot_file": "", "rtc_host_ip": "192.168.1.155", "rtc_port": 30003, "save_visualizations": true, "show_heightmap": false, "show_preset_cases_then_exit": false, "skip_noncontact_actions": false, "snapshot_file": "/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-22-31-35_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History/models/snapshot.reinforcement_trial_success_rate_best_value.pth", "stack_snapshot_file": "", "static_language_mask": false, "task_type": "vertical_square", "tcp_host_ip": "192.168.1.155", "tcp_port": 19990, "test_preset_cases": false, "test_preset_dir": "simulation/test-cases/", "test_preset_file": "", "timeout": 60, "transfer_grasp_to_place": false, "trial_reward": false, "unstack": false, "unstack_snapshot_file": "", "use_demo": false, "vertical_square_snapshot_file": ""}
> {"action_efficiency_best_index": 4514, "action_efficiency_best_value": 0.11835106382978723, "grasp_success_rate_best_index": 4512, "grasp_success_rate_best_value": 0.8905660377358491, "place_success_rate_best_index": 4512, "place_success_rate_best_value": 0.45112781954887216, "trial_success_rate_best_index": 4512, "trial_success_rate_best_value": 0.88}
> total trials: 101 (clearance_length, total number of trials)
> num trials evaluated: 100 start trial: 0
> avg max height: 3.88 (higher is better, find max height for each trial, then average those values)
> avg max progress: 0.97 (higher is better, (avg(round(max_heights))/4.0))
> avg reversals: 0.44 (lower is better)
> avg recoveries: 0.7272727272727273 (higher is better, no need for recovery attempts is best)
> avg logged trial success: 0.88 (successful trials according to trial_success_log.txt)
> avg trial success: 0.88 (higher is better, (success_height - epsilon) height or higher)
> action efficiency with 6 action per trial optimum: 0.13315579227696406 action efficiency with 4 action per trial optimum: 0.0887705281846427
> data dir: /home/costar/src/real_good_robot/logs/2021-02-09-22-31-35_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History/2021-02-18-17-40-48_Sim-Vertical-Square-Two-Step-Reward-Masked-Testing-Three-Step-History
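The action efficiency figures above are consistent with optimum actions per trial times the number of evaluated trials, divided by the total actions taken (e.g. 6 * 100 / ~4506 ≈ 0.133). A minimal sketch of that bookkeeping with hypothetical names (not the project's evaluation script):

    # Sketch of the trial summary metrics quoted above (hypothetical helper).
    import numpy as np

    def summarize_trials(max_heights, total_actions, goal_height=4.0,
                         optimum_actions_per_trial=6):
        max_heights = np.asarray(max_heights, dtype=float)
        avg_max_height = max_heights.mean()
        # avg max progress = avg(round(max_heights)) / goal_height
        avg_max_progress = np.round(max_heights).mean() / goal_height
        # action efficiency = ideal action count / actions actually taken
        action_efficiency = (optimum_actions_per_trial * len(max_heights)
                             / total_actions)
        return avg_max_height, avg_max_progress, action_efficiency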
X DEPTH CHANNEL HISTORY, DENSENET - SIM UNSTACKING - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - PROGRESS REWARD - FULL FEATURED RUN - REWARD SCHEDULE 0.1, 1, 1 - costar 2021-02-08
X ----------------------------------------------------------------------------------------
X
X cancelled due to unstacking reset bugs
X
X ± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type unstacking --depth_channels_history
X CANCELLED DUE TO PLACE COUNT BUG: Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-15-11-28_Sim-Unstacking-Two-Step-Reward-Masked-Training-Three-Step-History
X CANCELLED DUE TO PLACE COUNT BUG: Commit: a98b11806fcd3e007b13daae66ed343badc9a94f
X Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-22-37-32_Sim-Unstacking-Two-Step-Reward-Masked-Training-Three-Step-History
X BUGGY Commit: 1f61699ac1a29f00fcfd787fe5e4ce39b0fb46d0
X RESUME (randomize block order during resets, start around iteration 3k): export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_iter 40000 --random_actions --task_type unstacking --depth_channels_history --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-09-22-37-32_Sim-Unstacking-Two-Step-Reward-Masked-Training-Three-Step-History
X RESUME Commit: a8de19fcc5a137a4ac40b1a6204d72f164aa0a96
X RESUME2 (fix for a push that knocks the stack over not being counted as a progress reversal; resumed at around 3400 actions)
X Commit: 8c1f7c1ea7032b4743a75520af6d6bc697c77f12
X GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
x DEPTH CHANNEL HISTORY, DENSENET - SIM UNSTACKING - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - PROGRESS REWARD - FULL FEATURED RUN - REWARD SCHEDULE 0.1, 1, 1 - costar 2021-02-11
x ----------------------------------------------------------------------------------------
x
x cancelled due to place + topple -> success bug.
x
X ± export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type unstacking --depth_channels_history
X Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-11-17-11-57_Sim-Unstacking-Two-Step-Reward-Masked-Training-Three-Step-History
X PUSH BUG: Commit : 3576829f60c047d4d5b2465e144d12e0ff7fc950
X RESUME: export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type unstacking --depth_channels_history --resume '/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-11-20-52-14_Sim-Unstacking-Two-Step-Reward-Masked-Training-Three-Step-History'
X First resume Commit: 0cd323c2ec8509e3b5f1240897c7c1360824368c
X Commit: 81b1e73a4b6248ebedc5443fed0520ff0bebd777
X GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
DEPTH CHANNEL HISTORY, DENSENET - SIM 2x2 VERTICAL SQUARE - SPOT-Q-MASKED SPOT FRAMEWORK - COMMON SENSE - PROGRESS REWARD - FULL FEATURED RUN - REWARD SCHEDULE 0.1, 1, 1 - costar 2021-02-07
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type vertical_square --depth_channels_history
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-17-13-30-05_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
Commit: c989e94999ed44eba3835c4a1d8bca3a7b5dec18
Resume after 850 actions
Resume: export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type vertical_square --depth_channels_history --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-17-13-30-05_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
Commit: 59f321193e852e9927a81aace7dc47a62b7b4b08
Resumed after some tweaks to the detection of simulator errors.
Resume2: export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --common_sense --place --future_reward_discount 0.65 --tcp_port 19998 --random_seed 1238 --max_train_actions 40000 --random_actions --task_type vertical_square --depth_channels_history --resume /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-17-13-30-05_Sim-Vertical-Square-Two-Step-Reward-Masked-Training-Three-Step-History
Commit: 82b571830ee5ae6da5630d12b9e3c649f075f8d0
GPU 1, Tab 1, port 19998, right v-rep window, v-rep tab 8
SIM STACK - SPOT STANDARD - TRIAL REWARD - RANDOM ACTIONS - REWARD SCHEDULE 0.1, 1, 1 - workstation named costar 2021-02-20
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --tcp_port 19990 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --trial_reward --common_sense
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-20-16-41-11_Sim-Stack-SPOT-Trial-Reward-Masked-Training
Commit: b8798ea07167dff5c8fcf5cd11c3ace2b4a0e22d
Resume Commit: 61758dd9f669e57245160a3e1aad4017bafb88df (stopped to try a separate experiment, not due to a problem with this run)
GPU 0, Tab 0, port 19990, left center v-rep window, v-rep tab 7
config mistake - SIM TO REAL stacking - loaded a multiple time step model but ran with a single time step
----------------------------------------------------------------------------------------
Commit: 8d08a160efc75f91dc358a88cfa61334b799b06c
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --common_sense --trial_reward --snapshot_file '/home/costar/Downloads/2021-02-18-17-27-35_Sim-Stack-SPOT-Trial-Reward-Masked-Training-Three-Step-History-brst-model/snapshot.reinforcement_action_efficiency_best_value.pth'
Creating data logging session: '/home/costar/src/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing'
Note: one successful trial was incorrectly marked as a failure, I believe trial 6 or 7.
The actual trial success rate is 100%; the trial detector made an error.
> WARNING: get_heightmap() depth_heightmap contains negative heights with min -0.02579273700402497, saved depth heightmap png files may be invalid! See README.md for instructions to collect the depth heightmap again. Clipping the minimum to 0 for now.
> prev_height: 0.0 max_z: 0.03368447387620977 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> running check_stack_update_goal for grasp action
> prev_height: 1.0 max_z: 0.7410584252766149 goal_success: False needed to reset: False max_workspace_height: 0.56 <<<<<<<<<<<
> check_stack() stack_height: 0.7410584252766149 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False Does the code think the stack toppled: None
> main.py() process_actions: place_success: False
> main.py() process_actions: partial_stack_success: False
> STACK: trial: 11 actions/partial: 3.4285714285714284 actions/full stack: 14.4 (lower is better) Grasp Count: 85, grasp success rate: 0.6941176470588235 place_on_stack_rate: 0.711864406779661 place_attempts: 59 partial_stack_successes: 42 stack_successes: 10 trial_success_rate: 0.9090909090909091 stack goal: None current_height: 0.7410584252766149
> Move to Home Position Complete
> tcp port: 192.168.1.155
> Move to Home Position Complete
> trial_complete_indices: [ 13. 20. 28. 48. 65. 71. 96. 105. 113. 124. 143.]
> Max trial success rate: 0.9, at action iteration: 140. (total of 142 actions, max excludes first 140 actions)
> Max grasp success rate: 0.7108433734939759, at action iteration: 141. (total of 142 actions, max excludes first 140 actions)
> Max place success rate: 0.8305084745762712, at action iteration: 142. (total of 143 actions, max excludes first 140 actions)
> Max action efficiency: 0.5142857142857142, at action iteration: 142. (total of 143 actions, max excludes first 140 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing/transitions/action-efficiency.log.csv
> saving plot: 2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing-Real-Stack-SPOT-Trial-Reward-Masked-Testing_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:439: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:441: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing/best_stats.json
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-17-26-27_Real-Stack-SPOT-Trial-Reward-Masked-Testing
> Training results:
> {'trial_success_rate_best_value': 0.9, 'trial_success_rate_best_index': 140, 'grasp_success_rate_best_value': 0.7108433734939759, 'grasp_success_rate_best_index': 141, 'place_success_rate_best_value': 0.8305084745762712, 'place_success_rate_best_index': 142, 'action_efficiency_best_value': 0.5142857142857142, 'action_efficiency_best_index': 142}
SIM TO REAL stacking - load multiple time step model
----------------------------------------------------------------------------------------
Commit: 8d08a160efc75f91dc358a88cfa61334b799b06c
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --num_obj 8 --push_rewards --experience_replay --explore_rate_decay --check_z_height --place --future_reward_discount 0.65 --is_testing --random_seed 1238 --max_test_trials 10 --save_visualizations --random_actions --common_sense --trial_reward --depth_channels_history --snapshot_file '/home/costar/Downloads/2021-02-18-17-27-35_Sim-Stack-SPOT-Trial-Reward-Masked-Training-Three-Step-History-brst-model/snapshot.reinforcement_action_efficiency_best_value.pth'
TODO(ahundt) run this
=================================================================
good robot, watch this!
Commit ID: `c5e37aca2d66f31099be3dc0d3eda5b7cc0d621b`
Stack:
python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 50 --task_type stack --is_testing --use_demo --demo_path demos/stack_demos/ --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --no_common_sense_backprop --timeout 120
Row:
python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 50 --task_type row --is_testing --use_demo --demo_path demos/row_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --no_common_sense_backprop --timeout 120
Unstacking:
python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 50 --task_type unstack --is_testing --use_demo --demo_path demos/unstacking_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --no_common_sense_backprop --timeout 120
Vertical Square:
python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 50 --task_type vertical_square --is_testing --use_demo --demo_path demos/vertical_square_demos/ --stack_snapshot_file logs/base_models/best_stack/snapshot.reinforcement_action_efficiency_best_value.pth --unstack_snapshot_file logs/base_models/best_unstacking/snapshot.reinforcement_trial_success_rate_best_value.pth --row_snapshot_file logs/base_models/best_rows/snapshot.reinforcement_trial_success_rate_best_value.pth --grasp_only --cycle_consistency --no_common_sense_backprop --timeout 120
SIM WATCH THIS VERTICAL SQUARE - GLOBAL CYCLE CONSISTENCY - with backprop - 2021-02-27
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="1" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 50 --task_type vertical_square --is_testing --use_demo --demo_path demos/vertical_square_demos/ --stack_snapshot_file 'logs/base_models/best_stack/snapshot.reinforcement_trial_success_rate_best_index.pth' --unstack_snapshot_file logs/base_models/best_unstacking/snapshot.reinforcement_trial_success_rate_best_value.pth --row_snapshot_file logs/base_models/best_rows/snapshot.reinforcement_trial_success_rate_best_value.pth --grasp_only --cycle_consistency --no_common_sense_backprop --timeout 120 --tcp_port 19998
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-27-16-36-11_Sim-Vertical-Square-Imitation-Masked-Testing
Commit: 8d08a160efc75f91dc358a88cfa61334b799b06c
X SIM TO REAL WATCH THIS - UNSTACKING - GLOBAL CYCLE CONSISTENCY - no backprop - 2021-02-27
X ----------------------------------------------------------------------------------------
X killed due to human error: all blocks were removed from the scene during a run.
X export CUDA_VISIBLE_DEVICES="0" && python3 main.py --check_z_height --disable_two_step_backprop --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type unstack --is_testing --use_demo --demo_path demos/unstacking_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --no_common_sense_backprop --timeout 120
X Commit: 8eed6ccdc21d8be3b1ce4f387805e19766226709
SIM TO REAL WATCH THIS - UNSTACKING - GLOBAL CYCLE CONSISTENCY - no backprop - 2021-02-28
----------------------------------------------------------------------------------------
± export CUDA_VISIBLE_DEVICES="0" && python3 main.py --check_z_height --disable_two_step_backprop --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type unstack --is_testing --use_demo --demo_path demos/unstacking_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --no_common_sense_backprop
Commit: 5a02e9f1755db8ea78b944b78a7c9a06424a6143
Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/
On trial 5, ignore the first two actions; I didn't restack right away.
Note: 1 trial was counted as successful but was actually a failure.
Trial success rate: 90%
Action efficiency: 82% (60/73)
> Max trial success rate: 1.0, at action iteration: 73. (total of 75 actions, max excludes first 73 actions)
> Max grasp success rate: 0.8974358974358975, at action iteration: 73. (total of 75 actions, max excludes first 73 actions)
> Max place success rate: 0.9714285714285714, at action iteration: 73. (total of 76 actions, max excludes first 73 actions)
> Max action efficiency: 1.9726027397260273, at action iteration: 75. (total of 76 actions, max excludes first 73 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/transitions/action-efficiency.log.csv
> saving plot: 2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History-Real-Unstacking-Imitation-Masked-Testing-Three-Step-History_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:439: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:441: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History/best_stats.json
> Testing Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-14-27-07_Real-Unstacking-Imitation-Masked-Testing-Three-Step-History
> Testing results:
> {'trial_success_rate_best_value': 1.0, 'trial_success_rate_best_index': 73, 'grasp_success_rate_best_value': 0.8974358974358975, 'grasp_success_rate_best_index': 73, 'place_success_rate_best_value': 0.9714285714285714, 'place_success_rate_best_index': 73, 'action_efficiency_best_value': 1.9726027397260273, 'action_efficiency_best_index': 75}
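A minimal sketch of the manual correction arithmetic above, in Python; the 60 ideal actions is an assumption (6 ideal actions per trial over 10 trials, matching the (10*6) correction used later in these notes), not something the logger computed:

# Manually corrected stats for the 2021-02-28 real unstacking run (sketch, not project code).
total_trials = 10
successful_trials = 10 - 1              # one logged "success" was actually a failure
ideal_actions = 6 * total_trials        # assumed 6 ideal actions per trial
executed_actions = 73                   # actions executed in the run
print(successful_trials / total_trials)     # 0.9  -> trial success rate 90%
print(ideal_actions / executed_actions)     # ~0.82 -> action efficiency 82% (60/73)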
SIM TO REAL WATCH THIS - STACKING - GLOBAL CYCLE CONSISTENCY - no backprop - 2021-02-27
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --obj_mesh_dir objects/blocks --num_obj 4 --num_extra_obj 4 --check_z_height --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type stack --is_testing --use_demo --demo_path demos/stack_demos/ --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --disable_two_step_backprop
Commit: 14e709dcd2089f10491049de7f16a7918480b1e6
X CRASH (cancelled) Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-09-05_Real-Stack-Imitation-Masked-Testing-Three-Step-History
Creating data logging session: '/media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History'
Trial 6 had a problem and might not have been reset before starting, so skip that trial's score.
The final trial succeeded, for a total of 3/10 trial successes.
Efficiency: 25%, 60/(269-30)
> Move to Home Position Complete
> Grasp successful: False
> Move to Home Position Complete
> WARNING: get_heightmap() depth_heightmap contains negative heights with min -0.02440699668561084, saved depth heightmap png files may be invalid! See README.md for instructions to collect the depth heightmap again. Clipping the minimum to 0 for now.
> prev_height: 0.0 max_z: 0.033565873317433235 goal_success: True needed to reset: False max_workspace_height: -0.02 <<<<<<<<<<<
> running check_stack_update_goal for grasp action
> prev_height: 1.0 max_z: 0.7384492129835312 goal_success: False needed to reset: False max_workspace_height: 0.56 <<<<<<<<<<<
> check_stack() stack_height: 0.7384492129835312 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False Does the code think the stack toppled: None
> main.py() process_actions: place_success: False
> main.py() process_actions: partial_stack_success: False
> STACK: trial: 11 actions/partial: 4.285714285714286 actions/full stack: 90.0 (lower is better) Grasp Count: 156, grasp success rate: 0.7307692307692307 place_on_stack_rate: 0.5526315789473685 place_attempts: 114 partial_stack_successes: 63 stack_successes: 3 trial_success_rate: 0.2727272727272727 stack goal: [3 1] current_height: 0.7384492129835312
> Move to Home Position Complete
> Move to Home Position Complete
> trial_complete_indices: [ 29. 42. 53. 83. 113. 143. 173. 203. 233. 263. 269.]
> Max trial success rate: 0.2, at action iteration: 266. (total of 268 actions, max excludes first 266 actions)
> Max grasp success rate: 0.7402597402597403, at action iteration: 267. (total of 268 actions, max excludes first 266 actions)
> Max place success rate: 0.7280701754385965, at action iteration: 268. (total of 269 actions, max excludes first 266 actions)
> Max action efficiency: 0.06766917293233082, at action iteration: 268. (total of 269 actions, max excludes first 266 actions)
> saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History/transitions/trial-success-rate.log.csv
> saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History/transitions/grasp-success-rate.log.csv
> saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History/transitions/place-success-rate.log.csv
> saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History/transitions/action-efficiency.log.csv
> saving plot: 2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History-Real-Stack-Imitation-Masked-Testing-Three-Step-History_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:439: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:441: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History/data/best_stats.json
> saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History/best_stats.json
> Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-02-28-17-19-14_Real-Stack-Imitation-Masked-Testing-Three-Step-History
> XX Training results:
> XX {'trial_success_rate_best_value': 0.2, 'trial_success_rate_best_index': 266, 'grasp_success_rate_best_value': 0.7402597402597403, 'grasp_success_rate_best_index': 267, 'place_success_rate_best_value': 0.7280701754385965, 'place_success_rate_best_index': 268, 'action_efficiency_best_value': 0.06766917293233082, 'action_efficiency_best_index': 268}
With corrections from human observation:
> Testing results:
> {'trial_success_rate_best_value': 0.3, 'trial_success_rate_best_index': 266, 'grasp_success_rate_best_value': 0.7402597402597403, 'grasp_success_rate_best_index': 267, 'place_success_rate_best_value': 0.7280701754385965, 'place_success_rate_best_index': 268, 'action_efficiency_best_value': 0.2510460251, 'action_efficiency_best_index': 268}
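A short sketch of where those corrected numbers come from; the subtraction of 30 actions is assumed to remove the skipped trial 6, and 60 again assumes 6 ideal actions per trial over 10 trials:

# Human-corrected stats for the 2021-02-28 real stacking run (sketch, not project code).
corrected_trial_success = 3 / 10            # 0.30, vs. 0.2 in the raw log
corrected_efficiency = 60 / (269 - 30)      # ~0.2510, as in the corrected results dict above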
SIM TO REAL WATCH THIS - ROWS - GLOBAL CYCLE CONSISTENCY - no backprop - 2021-02-27
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --obj_mesh_dir objects/blocks --num_obj 4 --check_z_height --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type row --is_testing --use_demo --demo_path demos/row_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --vertical_square_snapshot_file logs/base_models/vertical_square_hist_densenet/snapshot.reinforcement_trial_success_rate_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --depth_channels_history --cycle_consistency --disable_two_step_backprop
Creating data logging session: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/
Commit: 1bd62ebd45e7c35bca0f9fda6323213bd4852593
> main.py() process_actions: partial_stack_success: False
> STACK: trial: 12 actions/partial: 3.5135135135135136 actions/full stack: 21.666666666666668 (lower is better) Grasp Count: 78, grasp success rate: 0.6666666666666666 place_on_stack_rate: 0.7115384615384616 place_attempts: 52 partial_stack_successes: 37 stack_successes: 6 trial_success_rate: 0.5 stack goal: [3 1 2] current_height: 1.0
> Move to Home Position Complete
> Move to Home Position Complete
> trial_complete_indices: [ 4. 10. 12. 20. 25. 25. 40. 70. 74. 79. 105. 129.]
> Max trial success rate: 0.5454545454545454, at action iteration: 126. (total of 128 actions, max excludes first 126 actions)
> Max grasp success rate: 0.6842105263157895, at action iteration: 126. (total of 128 actions, max excludes first 126 actions)
> Max place success rate: 0.7115384615384616, at action iteration: 127. (total of 129 actions, max excludes first 126 actions)
> Max action efficiency: 0.2857142857142857, at action iteration: 126. (total of 129 actions, max excludes first 126 actions)
> saving trial success rate: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/transitions/trial-success-rate.log.csv
> saving grasp success rate: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/transitions/grasp-success-rate.log.csv
> saving place success rate: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/transitions/place-success-rate.log.csv
> saving action efficiency: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/transitions/action-efficiency.log.csv
> saving plot: 2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History-Real-Row-Imitation-Masked-Testing-Three-Step-History_success_plot.png
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:439: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(save_file + file_format, dpi=300, optimize=True)
> /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:441: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
> plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
> saving best stats to: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/data/best_stats.json
> saving best stats to: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History/best_stats.json
> Training Complete! Dir: /home/costar/src/real_good_robot/logs/2021-03-02-18-33-42_Real-Row-Imitation-Masked-Testing-Three-Step-History
> Training results:
> {'trial_success_rate_best_value': 0.5454545454545454, 'trial_success_rate_best_index': 126, 'grasp_success_rate_best_value': 0.6842105263157895, 'grasp_success_rate_best_index': 126, 'place_success_rate_best_value': 0.7115384615384616, 'place_success_rate_best_index': 127, 'action_efficiency_best_value': 0.2857142857142857, 'action_efficiency_best_index': 126}
We looked at the images from above and manually counted; this left the trial success rate at 30%.
There were also two trials where the code crashed due to a typo (two of trials 4, 5, and 6), and we removed those two trials from the count.
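The MatplotlibDeprecationWarning repeated in the run logs above comes from plot.py passing optimize=True to plt.savefig(), which matplotlib stopped supporting as of 3.3. A minimal sketch of the fix, assuming the flag can simply be dropped (matplotlib's pil_kwargs argument may be an alternative for Pillow-handled formats if optimization is still wanted, but that is untested here):

# plot.py, around lines 439 and 441 (sketch): drop the deprecated optimize keyword.
plt.savefig(save_file + file_format, dpi=300)
plt.savefig(log_dir_fig_file + file_format, dpi=300)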
X SIM TO REAL WATCH THIS - VERTICAL SQUARE - depth time fix - SIM-TO-REAL now watch this imitation - NO backprop - with cycle consistency - 2021-03-03
X ----------------------------------------------------------------------------------------
X export CUDA_VISIBLE_DEVICES="0" && python3 main.py --check_z_height --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type vertical_square --is_testing --use_demo --demo_path demos/vertical_square_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --cycle_consistency --disable_two_step_backprop
X Commit: 5b738b22f279e862dde5df914421d0dd7436f026
X GPU 0, Tab 0, port 20000, bottom right v-rep window, v-rep tab 10
X
X STOPPED to run config below (limited time)
W WARNING SINGLE TIME STEP INPUT WITH THREE TIME STEP MODEL - SIM TO REAL WATCH THIS - VERTICAL SQUARE - depth time fix - SIM-TO-REAL now watch this imitation - NO backprop - with cycle consistency - 2021-03-03
W ----------------------------------------------------------------------------------------
W export CUDA_VISIBLE_DEVICES="0" && python3 main.py --check_z_height --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type vertical_square --is_testing --use_demo --demo_path demos/vertical_square_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --cycle_consistency
W Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing
W Commit: 5b738b22f279e862dde5df914421d0dd7436f026
W GPU 0, Tab 0, port 20000, bottom right v-rep window, v-rep tab 10
W
W > For task VERTICAL_SQUARE input current structure size: 1
W > check_stack() stack_height: 1.0 stack matches current goal: False partial_stack_success: False Does the code think a reset is needed: False Does the code think the stack toppled: None
W > main.py() process_actions: place_success: False
W > main.py() process_actions: partial_stack_success: False
W > STACK: trial: 11 actions/partial: 4.8936170212765955 actions/full stack: 230.0 (lower is better) Grasp Count: 136, grasp success rate: 0.6911764705882353 place_on_stack_rate: 0.5 place_attempts: 94 partial_stack_successes: 47 stack_successes: 1 trial_success_rate: 0.09090909090909091 stack goal: [3 1 2] current_height: 1.0
W > Move to Home Position Complete
W > Move to Home Position Complete
W > trial_complete_indices: [ 29. 38. 68. 81. 111. 134. 164. 184. 196. 217. 229.]
W > Max trial success rate: 0.1, at action iteration: 226. (total of 228 actions, max excludes first 226 actions)
W > Max grasp success rate: 0.7014925373134329, at action iteration: 227. (total of 228 actions, max excludes first 226 actions)
W > Max place success rate: 0.5053763440860215, at action iteration: 226. (total of 229 actions, max excludes first 226 actions)
W > Max action efficiency: 0.02654867256637168, at action iteration: 226. (total of 229 actions, max excludes first 226 actions)
W > saving trial success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing/transitions/trial-success-rate.log.csv
W > saving grasp success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing/transitions/grasp-success-rate.log.csv
W > saving place success rate: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing/transitions/place-success-rate.log.csv
W > saving action efficiency: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing/transitions/action-efficiency.log.csv
W > saving plot: 2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing-Real-Vertical-Square-Imitation-Masked-Testing_success_plot.png
W > /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:439: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
W > plt.savefig(save_file + file_format, dpi=300, optimize=True)
W > /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/plot.py:441: MatplotlibDeprecationWarning: savefig() got unexpected keyword argument "optimize" which is no longer supported as of 3.3 and will become an error two minor releases later
W > plt.savefig(log_dir_fig_file + file_format, dpi=300, optimize=True)
W > saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing/data/best_stats.json
W > saving best stats to: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing/best_stats.json
W > Training Complete! Dir: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-03-03-16-59-18_Real-Vertical-Square-Imitation-Masked-Testing
W > Training results:
W > {'trial_success_rate_best_value': 0.1, 'trial_success_rate_best_index': 226, 'grasp_success_rate_best_value': 0.7014925373134329, 'grasp_success_rate_best_index': 227, 'place_success_rate_best_value': 0.5053763440860215, 'place_success_rate_best_index': 226, 'action_efficiency_best_value': 0.02654867256637168, 'action_efficiency_best_index': 226}
W
W Efficiency correction: (10*6)/226
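A sketch of that correction: the raw log's 0.0265 matches 6/226, i.e. it presumably credited ideal actions only for the single successful trial, while the correction credits 6 ideal actions for each of the 10 trials:

# Efficiency correction for the 2021-03-03 real vertical square run (sketch, not project code).
raw_efficiency = 6 / 226                 # ~0.0265, as reported in the results dict above
corrected_efficiency = (10 * 6) / 226    # ~0.2655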
SIM TO REAL WATCH THIS - VERTICAL SQUARE - depth time fix - SIM-TO-REAL now watch this imitation - WITH backprop - with cycle consistency - 2021-03-03
----------------------------------------------------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --check_z_height --obj_mesh_dir objects/blocks --num_obj 4 --common_sense --place --random_seed 1238 --max_test_trials 10 --task_type vertical_square --is_testing --use_demo --demo_path demos/vertical_square_demos/ --stack_snapshot_file logs/base_models/stacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --unstack_snapshot_file logs/base_models/unstacking_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --row_snapshot_file logs/base_models/rows_hist_densenet/snapshot.reinforcement_action_efficiency_best_value.pth --grasp_only --cycle_consistency --depth_channels_history --no_common_sense_backprop --future_reward_discount 0.65
Commit: 5b738b22f279e862dde5df914421d0dd7436f026
GPU 0, Tab 0, port 20000, bottom right v-rep window, v-rep tab 10
================================
2021-06-10 LANGUAGE MODEL tests
================================
STACK LANGUAGE SIM TEST, REAL MODEL 2021-06-11
----------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 8 --common_sense --place --static_language_mask --language_model_config language_models/stacks/config.yaml --language_model_weights language_models/stacks/best.th --snapshot_file logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training-best-model-good-robot-paper/snapshot.reinforcement_trial_success_rate_best_index.pth --goal_num_obj=4 --is_sim --human_annotation --task_type stack --timeout 6000
Commit: 75cd6b81d8e1e8c616a2b3e6ade1cdfc31f66765
STACK LANGUAGE SIM TEST, SIM MODEL 2021-06-11
----------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 8 --common_sense --place --static_language_mask --language_model_config language_models/sim_stacks/config.yaml --language_model_weights language_models/sim_stacks/best.th --snapshot_file logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training-best-model-good-robot-paper/snapshot.reinforcement_trial_success_rate_best_index.pth --goal_num_obj=4 --is_sim --human_annotation --task_type stack --timeout 6000
Commit: 75cd6b81d8e1e8c616a2b3e6ade1cdfc31f66765
ROW LANGUAGE SIM TEST, SIM MODEL 2021-06-12
---------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 4 --static_language_mask --language_model_config language_models/sim_rows/config.yaml --language_model_weights language_models/sim_rows/best.th --snapshot_file logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_trial_success_rate_best_index.pth --end_on_incorrect_order --check_row --timeout 100000 --place --separation_threshold 0.08 --distance_threshold 0.04 --is_sim --human_annotation --task_type row --common_sense --timeout 6000
Commit: 6116224e54c28697f378c8a867d10cad8f054d34
X STACK LANGUAGE REAL TEST, REAL MODEL 2021-06-14
X -----------------------------------------------
X export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 4 --num_extra_obj 4 --common_sense --place --static_language_mask --language_model_config language_models/stacks/config.yaml --language_model_weights language_models/stacks/best.th --snapshot_file logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training-best-model-good-robot-paper/snapshot.reinforcement_trial_success_rate_best_index.pth --goal_num_obj=4 --human_annotation --task_type stack --common_sense --timeout 6000 --random_seed 1238 --max_test_trials 10
X Commit: cf83d58f2414cfa99cfbedd3dfbe45282914d289
X Logged 2 trials, useful for reference but not for results. Killed because the human labels could not be set on progress reversal, i.e. it would keep trying to place red on green if a stack toppled and the command needed to change.
X Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-06-14-21-18-09_Real-Stack-Two-Step-Reward-Masked-Testing
X Killed because num_obj and num_extra_obj were misconfigured
X Creating data logging session: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-06-14-21-09-41_Real-Stack-Two-Step-Reward-Masked-Testing
X STACK LANGUAGE REAL TEST, REAL MODEL 2021-06-15
X -----------------------------------------------
X export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 4 --num_extra_obj 4 --common_sense --place --static_language_mask --language_model_config language_models/stacks/config.yaml --language_model_weights language_models/stacks/best.th --snapshot_file logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training-best-model-good-robot-paper/snapshot.reinforcement_trial_success_rate_best_index.pth --goal_num_obj=4 --human_annotation --task_type stack --common_sense --timeout 6000 --random_seed 1238 --max_test_trials 10
X Commit: 0f450b75097a3a1d079905fc4476a7a1405dfa36
X
X Logged 2 trials, useful for reference but not for results. Killed because the human labels could not be set on progress reversal, i.e. it would keep trying to place red on green if a stack toppled and the command needed to change.
X Snapshot file: /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/
X STACK LANGUAGE REAL TEST, REAL MODEL 2021-06-15
X -----------------------------------------------
X export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 4 --num_extra_obj 4 --common_sense --place --static_language_mask --language_model_config language_models/stacks/config.yaml --language_model_weights language_models/stacks/best.th --snapshot_file logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training-best-model-good-robot-paper/snapshot.reinforcement_trial_success_rate_best_index.pth --goal_num_obj=4 --human_reset --human_annotation --task_type stack --common_sense --timeout 6000 --random_seed 1238 --max_test_trials 10
X Commit: bd67b2f14d5f91b5ecbbf0fd7ef118aa86f7ce79
X Logged 3 trials, but goal color order didn't reshuffle correctly
X /media/costar/f5f1f858-3666-4832-beea-b743127f1030/real_good_robot/logs/2021-06-15-11-47-04_Real-Stack-Two-Step-Reward-Masked-Testing
STACK LANGUAGE REAL TEST, REAL MODEL 2021-06-16
-----------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 4 --num_extra_obj 4 --common_sense --place --static_language_mask --language_model_config language_models/stacks/config.yaml --language_model_weights language_models/stacks/best.th --snapshot_file logs/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training/2020-06-07-21-42-16_Sim-Stack-SPOT-Trial-Reward-Masked-Training-best-model-good-robot-paper/snapshot.reinforcement_trial_success_rate_best_index.pth --goal_num_obj=4 --human_reset --human_annotation --task_type stack --common_sense --timeout 6000 --random_seed 1239 --max_test_trials 10
Commit: bbc134835e782cf711a2c626328072f1c2653453
ROW LANGUAGE REAL TEST, SIM MODEL 2021-06-16
--------------------------------------------
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_testing --obj_mesh_dir objects/blocks --num_obj 4 --static_language_mask --language_model_config language_models/rows/config.yaml --language_model_weights language_models/rows/best.th --snapshot_file logs/2020-06-03-12-05-28_Sim-Rows-Two-Step-Reward-Masked-Training/models/snapshot.reinforcement_trial_success_rate_best_index.pth --check_row --timeout 100000 --place --separation_threshold 0.08 --distance_threshold 0.04 --human_reset --human_annotation --task_type row --common_sense --timeout 6000 --random_seed 1239 --max_test_trials 10
Commit: bbc134835e782cf711a2c626328072f1c2653453