Commit 993c422 (parent 2aed7d4): 9 changed files with 161 additions and 1 deletion.
Binary file not shown.
@@ -0,0 +1,52 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'C·ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters'
subtitle: ''
summary: ''
authors:
- Zhiyang Dou
- Xuelin Chen
- Qingnan Fan
- Taku Komura
- Wenping Wang

tags:
- 'Physics-based Character Animation'
- 'GAIL'
- 'RL'

categories: []
date: '2023-12-01'
lastmod: 2021-01-15T21:34:50Z
featured: false
draft: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ''
  focal_point: ''
  preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2021-01-15T21:34:50.388741Z'
publication_types:
# 1 Conference paper
# 2 Journal article
# 3 Preprint
# 4 Report
# 5 Book
# 6 Book section
# 7 Thesis
# 8 Patent
- '1'
abstract: We present C·ASE, an efficient and effective framework that learns Conditional Adversarial Skill Embeddings for physics-based characters. C·ASE enables the physically simulated character to learn a diverse repertoire of skills while providing controllability in the form of direct manipulation of the skills to be performed. This is achieved by dividing the heterogeneous skill motions into distinct subsets containing homogeneous samples for training a low-level conditional model to learn the conditional behavior distribution. The skill-conditioned imitation learning naturally offers explicit control over the character’s skills after training. The training course incorporates the focal skill sampling, skeletal residual forces, and element-wise feature masking to balance diverse skills of varying complexities, mitigate dynamics mismatch to master agile motions and capture more general behavior characteristics, respectively. Once trained, the conditional model can produce highly diverse and realistic skills, outperforming state-of-the-art models, and can be repurposed in various downstream tasks. In particular, the explicit skill control handle allows a high-level policy or a user to direct the character with desired skill specifications, which we demonstrate is advantageous for interactive character animation.
publication: 'SIGGRAPH Asia 2023'
---
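The abstract above describes a skill-conditioned low-level controller: the policy receives an explicit skill code alongside the character state, which is what makes the learned skills directly controllable after training. The sketch below illustrates only that conditioning idea with a plain MLP policy; the class, argument names, and dimensions are hypothetical and not taken from the authors' code.

```python
# Minimal sketch of skill-conditioned control, assuming a discrete skill label
# embedded as a one-hot vector. Illustrative only; not the paper's implementation.
import torch
import torch.nn as nn

class ConditionalSkillPolicy(nn.Module):
    def __init__(self, state_dim, skill_dim, action_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + skill_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, skill_code):
        # Conditioning: concatenate the skill code with the simulated
        # character's state, so the caller chooses which skill is performed.
        return self.net(torch.cat([state, skill_code], dim=-1))

# Example: request skill #3 out of 8 for one character state (toy dimensions).
policy = ConditionalSkillPolicy(state_dim=105, skill_dim=8, action_dim=31)
skill = torch.nn.functional.one_hot(torch.tensor([3]), num_classes=8).float()
action = policy(torch.randn(1, 105), skill)
```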
Binary file not shown.
@@ -0,0 +1,54 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'Coverage Axis: Inner Point Selection for 3D Shape Skeletonization'
subtitle: ''
summary: ''
authors:
- Zhiyang Dou
- Cheng Lin
- Rui Xu
- Lei Yang
- Shiqing Xin
- Taku Komura
- Wenping Wang


tags:
- 'Geometric Modeling'
- 'Medial Axis Transform'

categories: []
date: '2022-04-01'
lastmod: 2021-01-15T21:34:50Z
featured: false
draft: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ''
  focal_point: ''
  preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2021-01-15T21:34:50.388741Z'
publication_types:
# 1 Conference paper
# 2 Journal article
# 3 Preprint
# 4 Report
# 5 Book
# 6 Book section
# 7 Thesis
# 8 Patent
- '2'
abstract: In this paper, we present a simple yet effective formulation called Coverage Axis for 3D shape skeletonization. Inspired by the set cover problem, our key idea is to cover all the surface points using as few inside medial balls as possible. This formulation inherently induces a compact and expressive approximation of the Medial Axis Transform (MAT) of a given shape. Different from previous methods that rely on local approximation error, our method allows a global consideration of the overall shape structure, leading to an efficient high-level abstraction and superior robustness to noise. Another appealing aspect of our method is its capability to handle more generalized input such as point clouds and poor-quality meshes. Extensive comparisons and evaluations demonstrate the remarkable effectiveness of our method for generating compact and expressive skeletal representation to approximate the MAT.
publication: 'EUROGRAPHICS 2022'
---
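The abstract above frames skeletonization as a set-cover problem: select as few inner medial balls as possible so that their slightly dilated union covers every surface sample. The paper poses this as a global optimization; the sketch below uses a plain greedy set-cover heuristic purely to illustrate the formulation, with hypothetical inputs (precomputed candidate ball centers and radii, sampled surface points).

```python
# Greedy illustration of the ball set-cover formulation; not the paper's solver.
import numpy as np

def greedy_ball_cover(surface_pts, centers, radii, dilation=0.02):
    """Greedily pick inner balls until every surface point is covered."""
    # coverage[i, j] is True if dilated ball j covers surface point i.
    d = np.linalg.norm(surface_pts[:, None, :] - centers[None, :, :], axis=-1)
    coverage = d <= (radii[None, :] + dilation)

    uncovered = np.ones(len(surface_pts), dtype=bool)
    selected = []
    while uncovered.any():
        # Pick the ball that covers the most still-uncovered points.
        gains = (coverage & uncovered[:, None]).sum(axis=0)
        best = int(gains.argmax())
        if gains[best] == 0:  # remaining points cannot be covered by any candidate
            break
        selected.append(best)
        uncovered &= ~coverage[:, best]
    return selected

# Toy usage with random data; real inputs would come from a mesh or point cloud.
pts = np.random.rand(500, 3)
ctrs = np.random.rand(200, 3) * 0.8 + 0.1
rad = np.random.rand(200) * 0.15
chosen = greedy_ball_cover(pts, ctrs, rad)
```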
@@ -0,0 +1,54 @@
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'TORE: Token Reduction for Efficient Human Mesh Recovery with Transformer'
subtitle: ''
summary: ''
authors:
- Zhiyang Dou*
- Qingxuan Wu*
- Cheng Lin
- Zeyu Cao
- Qiangqiang Wu
- Weilin Wan
- Taku Komura
- Wenping Wang

tags:
- '3D human reconstruction'
- 'Token reduction'

categories: []
date: '2023-06-01'
lastmod: 2021-01-15T21:34:50Z
featured: false
draft: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ''
  focal_point: ''
  preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2021-01-15T21:34:50.388741Z'
publication_types:
# 1 Conference paper
# 2 Journal article
# 3 Preprint
# 4 Report
# 5 Book
# 6 Book section
# 7 Thesis
# 8 Patent
- '1'
abstract: In this paper, we introduce a set of simple yet effective TOken REduction (TORE) strategies for Transformer-based Human Mesh Recovery from monocular images. Current SOTA performance is achieved by Transformer-based structures. However, they suffer from high model complexity and computation cost caused by redundant tokens. We propose token reduction strategies based on two important aspects, i.e., the 3D geometry structure and 2D image feature, where we hierarchically recover the mesh geometry with priors from body structure and conduct token clustering to pass fewer but more discriminative image feature tokens to the Transformer. Our method massively reduces the number of tokens involved in high-complexity interactions in the Transformer. This leads to a significantly reduced computational cost while still achieving competitive or even higher accuracy in shape recovery. Extensive experiments across a wide range of benchmarks validate the superior effectiveness of the proposed method. We further demonstrate the generalizability of our method on hand mesh recovery. Our code will be publicly available once the paper is published.
publication: 'ICCV 2023'
---
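The abstract above cuts Transformer cost by clustering a large set of image-feature tokens into a small set of representative tokens before the expensive attention layers. The paper's token clustering sits inside a learned pipeline; the sketch below is only a stand-in using a few k-means-style pooling steps on a generic token tensor, with illustrative shapes and names.

```python
# Minimal token-pooling sketch, assuming a (B, N, C) tensor of image tokens.
# Illustrative only; not the TORE architecture.
import torch

def cluster_tokens(tokens, num_clusters=49, iters=5):
    """Reduce (B, N, C) feature tokens to (B, K, C) cluster centers."""
    B, N, C = tokens.shape
    idx = torch.randperm(N)[:num_clusters]
    centers = tokens[:, idx, :].clone()             # (B, K, C) initial centers
    for _ in range(iters):
        dist = torch.cdist(tokens, centers)         # (B, N, K) token-center distances
        assign = dist.argmin(dim=-1)                # (B, N) nearest-center index
        for k in range(num_clusters):
            mask = (assign == k).unsqueeze(-1).float()   # (B, N, 1)
            denom = mask.sum(dim=1).clamp(min=1)         # avoid division by zero
            centers[:, k, :] = (tokens * mask).sum(dim=1) / denom
    return centers

# Toy usage: pool a 28x28 grid of 256-d tokens down to 49 tokens per image.
feats = torch.randn(2, 784, 256)
reduced = cluster_tokens(feats)   # (2, 49, 256), passed on to the Transformer
```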