
Learning of Efficient Stable Robot Grasping Approach Using Transformer-based Control Policy

This repo contains the implementation of Learning of Efficient Stable Robot Grasping Approach Using Transformer-based Control Policy (ICIEA 2024 Best Paper Finalist) by En Yen Puang, Zechen Li, Chee Meng Chew, Shan Luo, Yan Wu.

[ Paper ] [ Page ] [ Slides ]

[figure: intro]

Table of Contents

  1. Installation
  2. Run the Examples
  3. On Real UR5 & Sensor
  4. Citation

Installation

Code adapted from source.

  1. Clone the project:

    git clone git@github.com:enyen/TactileSimulation.git
  2. Clone the submodule:

    mkdir -p externals/DiffHand
    git clone https://github.com/eanswer/DiffHand.git externals/DiffHand
  3. Install CMake >= 3.1.0

  4. Create conda environment

    conda create -n tactile_sim python=3.9
    conda activate tactile_sim
    pip install torch torchvision scikit-learn opencv-python einops stable_baselines3 tensorboard scipy pyyaml tqdm rich matplotlib pybind11 math3d==3.4.1 git+https://github.com/enyen/python-urx
  5. Install DiffRedMax

    sudo apt-get install freeglut3-dev libglfw3-dev libxinerama-dev libxcursor-dev libxi-dev libxxf86vm-dev
    cd externals/DiffHand/core
    python setup.py install
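
Optionally, as a quick sanity check that is not part of the original setup steps, you can confirm that the main Python dependencies import cleanly before running the examples:

    python -c "import torch, stable_baselines3, cv2, einops; print('environment ok')"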

Run the Examples

Training in simulation:

cd examples/UnstableGraspExp
python train_sb3.py

Testing in simulation using a trained model saved as ./storage/ug_datetime.zip:

python train_sb3.py ./storage/ug_datetime.zip vis_mode

vis_mode can be one of the following (see the example below):

  1. 'None' -> no visualization, just statistics
  2. 'show' -> visualize every step
  3. 'record' -> produce a video of the whole episode
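
For example, to render every step of a trained policy (using the placeholder checkpoint name from above):

python train_sb3.py ./storage/ug_datetime.zip show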

Visualize training progress using tensorboard:

tensorboard --logdir log

Getting normalization stats:

# inside examples/UnstableGraspExp, in a Python shell

from unstable_grasp_env import UnstableGraspEnv
env = UnstableGraspEnv()
env.data_stat()
# update self.tactile_means and self.tactile_stds manually inside __init__.
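
As a rough sketch of what these statistics are for (the z-score form below is an assumption for illustration, not code taken from this repo), tactile observations are typically standardized with the collected means and stds:

import numpy as np

# placeholder values -- replace with the output of env.data_stat()
tactile_means = np.array([0.0, 0.0])
tactile_stds = np.array([1.0, 1.0])

def normalize_tactile(obs):
    # per-channel z-score normalization of a tactile observation
    return (obs - tactile_means) / tactile_stds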

[demo: unstable_grasp]


On Real UR5 & Sensor

Build the marker-flow library (adapted from source).

cd examples/UnstableGraspExp/marker_flow
make

Test the marker flow:

# inside examples/UnstableGraspExp, in a Python shell

from marker_flow.marker_flow import MarkerFlow
mf = MarkerFlow()
# ... follow the prompt to select the camera id
mf._run(debug=True, collect=False)
# ... view the marker flow visualization; press ctrl-c to stop

[demo: real_grasp]

Collect sensor means and stds:

cd examples/UnstableGraspExp
python test_ur5.py

Actual testing:

python test_ur5.py ./storage/ug_datetime

Citation

Please cite our paper if you use this code.

 @misc{puang2024learningstablerobotgrasping,
  title={Learning Stable Robot Grasping with Transformer-based Tactile Control Policies}, 
  author={En Yen Puang and Zechen Li and Chee Meng Chew and Shan Luo and Yan Wu},
  year={2024},
  eprint={2407.21172},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2407.21172}, 
 }
