DAIR: Disentangled Attention as Intrinsic Regularization
for Safe and Efficient Bimanual Manipulation

Minghao Zhang*, Pingcheng Jian*, Yi Wu, Huazhe Xu, Xiaolong Wang

arXiv   Code (coming soon)

Summary

We address the problem of solving complex bimanual robot manipulation tasks on multiple objects with sparse rewards. Such tasks can be decomposed into sub-tasks that the two robots accomplish concurrently or sequentially for better efficiency. While previous reinforcement learning approaches primarily focus on modeling the compositionality of sub-tasks, two fundamental issues are largely ignored when learning cooperative strategies for two robots: (i) domination, i.e., one robot may try to solve the task by itself, leaving the other idle; (ii) conflict, i.e., one robot can easily intrude into the other's workspace when the two execute different sub-tasks simultaneously. To tackle these issues, we propose a novel technique called disentangled attention, which provides an intrinsic regularization that encourages the two robots to focus on separate sub-tasks and objects. We evaluate our method on a suite of bimanual manipulation tasks. Experimental results show that the proposed intrinsic regularization successfully avoids domination and reduces conflict, leading to significantly more effective cooperative strategies than all baselines.

Short Description Video

Method

Our goal is to design a model and introduce a novel intrinsic regularization that better trains policies for bimanual manipulation tasks involving many objects. We want the agents to automatically learn to allocate the workload between themselves while avoiding the problems of domination and conflict. We use a self-attention architecture to combine the embedded representations of all agents and objects. On top of this architecture, an intrinsic loss is computed from the attention probabilities, encouraging the two agents to attend to different sub-tasks and objects.
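To make the idea concrete, here is a minimal PyTorch sketch of one attention layer with such an intrinsic regularizer. The single nn.MultiheadAttention layer, the token layout, and the exact overlap penalty (the elementwise product of the two arms' attention distributions over objects) are illustrative assumptions, not the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn

class DisentangledAttentionSketch(nn.Module):
    """One self-attention layer over agent and object tokens, plus an
    intrinsic loss that penalizes the two agents for attending to the
    same objects. Sizes and the loss form are illustrative assumptions."""

    def __init__(self, embed_dim=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

    def forward(self, agent_emb, object_emb):
        # agent_emb:  (B, 2, D) -- one token per robot arm
        # object_emb: (B, N, D) -- one token per object
        tokens = torch.cat([agent_emb, object_emb], dim=1)   # (B, 2+N, D)
        out, attn_w = self.attn(tokens, tokens, tokens,
                                average_attn_weights=True)   # attn_w: (B, 2+N, 2+N)

        # Attention each arm pays to the object tokens, renormalized so
        # each arm's weights over the objects form a distribution.
        a = attn_w[:, :2, 2:]                                # (B, 2, N)
        a = a / (a.sum(dim=-1, keepdim=True) + 1e-8)

        # Disentanglement penalty: overlap between the two arms' attention
        # distributions; it is zero when they attend to disjoint objects.
        intrinsic_loss = (a[:, 0] * a[:, 1]).sum(dim=-1).mean()
        return out, intrinsic_loss
```

During training, this intrinsic term would simply be added to the RL objective with a weighting coefficient, e.g. total_loss = policy_loss + beta * intrinsic_loss, where beta is a hypothetical hyperparameter trading off task reward against disentanglement.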

Results

Three Blocks Rearrangement

Attention Baseline | Disentangled Attention (Ours)

Eight Blocks Rearrangement

Attention Baseline | Disentangled Attention (Ours)

Two Blocks Stacking

Attention Baseline | Disentangled Attention (Ours)

Three Blocks Stacking

Attention Baseline | Disentangled Attention (Ours)

Two Tower Stacking

Attention Baseline | Disentangled Attention (Ours)

Open Box and Place

Attention Baseline | Disentangled Attention (Ours)

Push with Door

Attention Baseline | Disentangled Attention (Ours)

Lift Bar

This task highlights the synergistic skills our method preserves: although we regularize the two agents' attention to be disentangled, they can still discover cooperative behaviors when a sub-task, such as lifting the bar, requires both arms.

Attention Baseline | Disentangled Attention (Ours)

Bibtex

@article{zhang2021disentangled,
  title={DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation},
  author={Zhang, Minghao and Jian, Pingcheng and Wu, Yi and Xu, Huazhe and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2106.05907},
  year={2021}
}
Correspondence to Minghao Zhang