diff --git a/content/authors/Zhiyang-Dou/_index.md b/content/authors/Zhiyang-Dou/_index.md index 34d4ebe..414b30f 100644 --- a/content/authors/Zhiyang-Dou/_index.md +++ b/content/authors/Zhiyang-Dou/_index.md @@ -38,7 +38,7 @@ organizations: interests: - Character Animation --
Geometric Computing +- Geometric Computing education: courses: diff --git a/content/publication/.DS_Store b/content/publication/.DS_Store new file mode 100644 index 0000000..abaaead Binary files /dev/null and b/content/publication/.DS_Store differ diff --git a/content/publication/case-2023/featured.png b/content/publication/case-2023/featured.png new file mode 100644 index 0000000..9e3c4a3 Binary files /dev/null and b/content/publication/case-2023/featured.png differ diff --git a/content/publication/case-2023/index.md b/content/publication/case-2023/index.md new file mode 100644 index 0000000..1044688 --- /dev/null +++ b/content/publication/case-2023/index.md @@ -0,0 +1,52 @@ +--- +# Documentation: https://wowchemy.com/docs/managing-content/ + +title: 'C·ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters' +subtitle: '' +summary: '' +authors: +- Zhiyang Dou +- Xuelin Chen +- Qingnan Fan +- Taku Komura +- Wenping Wang + +tags: +- 'Physics-based Character Animation' +- 'GAIL' +- 'RL' + +categories: [] +date: '2023-12-01' +lastmod: 2021-01-15T21:34:50Z +featured: false +draft: false + +# Featured image +# To use, add an image named `featured.jpg/png` to your page's folder. +# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight. +image: + caption: '' + focal_point: '' + preview_only: false + +# Projects (optional). +# Associate this post with one or more of your projects. +# Simply enter your project's folder or file name without extension. +# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`. +# Otherwise, set `projects = []`. 
+projects: [] +publishDate: '2021-01-15T21:34:50.388741Z' +publication_types: +# 1 Conference paper +# 2 Journal article +# 3 Preprint +# 4 Report +# 5 Book +# 6 Book section +# 7 Thesis +# 8 Patent +- '1' +abstract: We present C·ASE, an efficient and effective framework that learns Conditional Adversarial Skill Embeddings for physics-based characters. C·ASE enables the physically simulated character to learn a diverse repertoire of skills while providing controllability in the form of direct manipulation of the skills to be performed. This is achieved by dividing the heterogeneous skill motions into distinct subsets containing homogeneous samples for training a low-level conditional model to learn the conditional behavior distribution. The skill-conditioned imitation learning naturally offers explicit control over the character’s skills after training. The training procedure incorporates focal skill sampling, skeletal residual forces, and element-wise feature masking to balance diverse skills of varying complexities, mitigate dynamics mismatch to master agile motions, and capture more general behavior characteristics, respectively. Once trained, the conditional model can produce highly diverse and realistic skills, outperforming state-of-the-art models, and can be repurposed in various downstream tasks. In particular, the explicit skill control handle allows a high-level policy or a user to direct the character with desired skill specifications, which we demonstrate is advantageous for interactive character animation.
+publication: 'SIGGRAPH Asia 2023' +--- diff --git a/content/publication/cat-2022/.DS_Store b/content/publication/cat-2022/.DS_Store new file mode 100644 index 0000000..5008ddf Binary files /dev/null and b/content/publication/cat-2022/.DS_Store differ diff --git a/content/publication/cat-2022/featured.png b/content/publication/cat-2022/featured.png new file mode 100644 index 0000000..ab696fb Binary files /dev/null and b/content/publication/cat-2022/featured.png differ diff --git a/content/publication/cat-2022/index.md b/content/publication/cat-2022/index.md new file mode 100644 index 0000000..98c5b5c --- /dev/null +++ b/content/publication/cat-2022/index.md @@ -0,0 +1,54 @@ +--- +# Documentation: https://wowchemy.com/docs/managing-content/ + +title: 'Coverage Axis: Inner Point Selection for 3D Shape Skeletonization' +subtitle: '' +summary: '' +authors: +- Zhiyang Dou +- Cheng Lin +- Rui Xu +- Lei Yang +- Shiqing Xin +- Taku Komura +- Wenping Wang + + +tags: +- 'Geometric Modeling' +- 'Medial Axis Transform' + +categories: [] +date: '2022-04-01' +lastmod: 2021-01-15T21:34:50Z +featured: false +draft: false + +# Featured image +# To use, add an image named `featured.jpg/png` to your page's folder. +# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight. +image: + caption: '' + focal_point: '' + preview_only: false + +# Projects (optional). +# Associate this post with one or more of your projects. +# Simply enter your project's folder or file name without extension. +# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`. +# Otherwise, set `projects = []`. +projects: [] +publishDate: '2021-01-15T21:34:50.388741Z' +publication_types: +# 1 Conference paper +# 2 Journal article +# 3 Preprint +# 4 Report +# 5 Book +# 6 Book section +# 7 Thesis +# 8 Patent +- '2' +abstract: In this paper, we present a simple yet effective formulation called Coverage Axis for 3D shape skeletonization. 
Inspired by the set cover problem, our key idea is to cover all the surface points using as few interior medial balls as possible. This formulation inherently induces a compact and expressive approximation of the Medial Axis Transform (MAT) of a given shape. Unlike previous methods that rely on local approximation error, our method allows a global consideration of the overall shape structure, leading to an efficient high-level abstraction and superior robustness to noise. Another appealing aspect of our method is its capability to handle more general inputs such as point clouds and poor-quality meshes. Extensive comparisons and evaluations demonstrate the remarkable effectiveness of our method for generating a compact and expressive skeletal representation to approximate the MAT. +publication: 'EUROGRAPHICS 2022' +--- diff --git a/content/publication/tore-2023/featured.png b/content/publication/tore-2023/featured.png new file mode 100644 index 0000000..9e3c4a3 Binary files /dev/null and b/content/publication/tore-2023/featured.png differ diff --git a/content/publication/tore-2023/index.md b/content/publication/tore-2023/index.md new file mode 100644 index 0000000..fda05f4 --- /dev/null +++ b/content/publication/tore-2023/index.md @@ -0,0 +1,54 @@ +--- +# Documentation: https://wowchemy.com/docs/managing-content/ + +title: 'TORE: Token Reduction for Efficient Human Mesh Recovery with Transformer' +subtitle: '' +summary: '' +authors: +- Zhiyang Dou* +- Qingxuan Wu* +- Cheng Lin +- Zeyu Cao +- Qiangqiang Wu +- Weilin Wan +- Taku Komura +- Wenping Wang + +tags: +- '3D human reconstruction' +- 'Token reduction' + +categories: [] +date: '2023-06-01' +lastmod: 2021-01-15T21:34:50Z +featured: false +draft: false + +# Featured image +# To use, add an image named `featured.jpg/png` to your page's folder. +# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
+image: + caption: '' + focal_point: '' + preview_only: false + +# Projects (optional). +# Associate this post with one or more of your projects. +# Simply enter your project's folder or file name without extension. +# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`. +# Otherwise, set `projects = []`. +projects: [] +publishDate: '2021-01-15T21:34:50.388741Z' +publication_types: +# 1 Conference paper +# 2 Journal article +# 3 Preprint +# 4 Report +# 5 Book +# 6 Book section +# 7 Thesis +# 8 Patent +- '1' +abstract: In this paper, we introduce a set of simple yet effective TOken REduction (TORE) strategies for Transformer-based Human Mesh Recovery from monocular images. Current SOTA performance is achieved by Transformer-based structures. However, they suffer from high model complexity and computational cost caused by redundant tokens. We propose token reduction strategies based on two important aspects, i.e., the 3D geometry structure and 2D image features, where we hierarchically recover the mesh geometry with priors from the body structure and conduct token clustering to pass fewer but more discriminative image feature tokens to the Transformer. Our method massively reduces the number of tokens involved in high-complexity interactions in the Transformer. This leads to a significantly reduced computational cost while still achieving competitive or even higher accuracy in shape recovery. Extensive experiments across a wide range of benchmarks validate the superior effectiveness of the proposed method. We further demonstrate the generalizability of our method on hand mesh recovery. Our code will be publicly available once the paper is published. +publication: 'ICCV 2023' +---