AI for Content Creation Workshop

@ CVPR 2022

19th June 2022

Room 208-210 + Zoom



Summary

The AI for Content Creation (AI4CC) workshop at CVPR 2022 brings together researchers in computer vision, machine learning, and AI. Content creation is required for simulation and training data generation, media such as photography and videography, virtual reality and gaming, art and design, and documents and advertising (to name just a few application domains). Recent progress in machine learning, deep learning, and AI techniques has allowed us to turn hours of manual, painstaking content creation work into minutes or seconds of automated or interactive work. For instance, generative adversarial networks (GANs) can produce photorealistic images of 2D and 3D content such as humans, landscapes, interior scenes, virtual environments, and even industrial designs. Neural networks can super-resolve videos and synthesize intermediate frames for smooth slow motion, generate novel views between photos and even extrapolate beyond them, and transfer styles to convincingly render and reinterpret content. In addition to creating awe-inspiring artistic images, these techniques offer unique opportunities for generating additional and more diverse training data. Learned priors can also be combined with explicit appearance and geometric constraints, perceptual understanding, or even functional and semantic constraints of objects.

AI for content creation lies at the intersection of the graphics, computer vision, and design communities. However, researchers and professionals in these fields may not be aware of its full potential and inner workings. As such, the workshop comprises two parts: techniques for content creation and applications of content creation. The workshop has three goals:

  1. To cover introductory concepts that help interested researchers from other fields get started in this exciting area.
  2. To present success stories to show how deep learning can be used for content creation.
  3. To discuss pain points that designers face using content creation tools.

More broadly, we hope that the workshop will serve as a forum to discuss the latest topics in content creation and the challenges that vision and learning researchers can help solve.

Welcome!
Deqing Sun (Google)
Huiwen Chang (Google)
Tali Dekel (Weizmann Institute)
Lu Jiang (Google)
Jing Liao (City University of Hong Kong)
Ming-Yu Liu (NVIDIA)
Cynthia Lu (Adobe)
Seungjun Nah (NVIDIA)
James Tompkin (Brown)
Ting-Chun Wang (NVIDIA)
Jun-Yan Zhu (Carnegie Mellon)




Awards

Best paper

Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Junmo Kim, Fix the Noise: Disentangling Source Feature for Transfer Learning of StyleGAN | Webpage + Code

Runner-up

Yunseok Jang, Ruben Villegas, Jimei Yang, Duygu Ceylan, Xin Sun, Honglak Lee, RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

Best poster

Remote
Yohan Poirier-Ginter, Alexandre Lessard, Ryan Smith, Jean-François Lalonde, Overparameterization Improves StyleGAN Inversion | Webpage

Physical
Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole, Zero-Shot Text-Guided Object Generation with Dream Fields | Webpage | Poster (also published at CVPR 2022)



Schedule — Video Recording


Morning session:
Time (CDT)
08:50 Welcome, introductions, and best papers 👋
09:00 Ohad Fried (Reichman University)
09:25 Michael Black (MPI for Intelligent Systems)
09:50 Björn Ommer (University of Munich)
10:15 Coffee break
10:30 Elisa Ricci (University of Trento)
10:55 Duygu Ceylan (Adobe)
11:20 Olga Russakovsky (Princeton)
11:45 Poster session 1
Papers
01. Efimova et al., Conditional Vector Graphics Generation for Music Cover Images | Webpage
02. Sun et al., SeCGAN: Parallel Conditional Generative Adversarial Networks for Face Editing via Semantic Consistency
03. Jenni et al., Video-ReTime: Learning Temporally Varying Speediness for Time Remapping
04. Lee et al., RewriteNet: Reliable Scene Text Editing with Implicit Decomposition of Text Contents and Styles | Webpage
05. Menéndez González et al., SaiNet: Inpainting behind objects using geometrically meaningful masks
06. Poirier-Ginter et al., Overparameterization Improves StyleGAN Inversion | Webpage
07. Turgutlu et al., LayoutBERT: Masked Language Layout Model for Object Insertion
08. Jang et al., RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects

Extended Abstracts
09. Ganz et al., Improved Image Generation via Sparse Modeling
10. Singh et al., On Conditioning the Input Noise for Controlled Image Generation with Diffusion Models
11. Esser et al., Towards Unified Keyframe Propagation Models | Webpage
12. Haim et al., Diverse Video Generation from a Single Video

Accepted at Other Venues
13. Zhang et al., Text-guided Image Manipulation based on Sentence-aware and Word-aware Network, ICME 2021 | Poster
14. Ren et al., Neural Volumetric Object Selection, CVPR 2022 | Webpage
15. Zhao et al., Rethinking Deep Face Restoration
16. Mu et al., CoordGAN: Self-Supervised Dense Correspondences Emerge from GANs, CVPR 2022 | Webpage
17. Cazenavette et al., Dataset Distillation by Matching Training Trajectories, CVPR 2022 | Webpage
18. Xue et al., GIRAFFE HD: A High-Resolution 3D-aware Generative Model, CVPR 2022 | Webpage
19. Parmar et al., Multilayer GAN Inversion and Editing, CVPR 2022
20. Jain et al., Zero-Shot Text-Guided Object Generation with Dream Fields, CVPR 2022 | Webpage
12:45 Lunch break 🥪


Afternoon session:
Time (CDT)
13:45 Oral session
14:00 Aditya Ramesh (OpenAI DALL·E 2)
14:25 Coffee break
14:45 David Bau (Northeastern University)
15:10 Adriana Schulz (University of Washington)
15:35 Xun Huang (NVIDIA)
16:00 Coffee break
16:10 Richard Zhang (Adobe)
16:35 Chitwan Saharia (Google Imagen)
17:00 Poster session 2
Papers
01. Haas et al., Tensor-based Emotion Editing in the StyleGAN Latent Space
02. Kim et al., Cross-Domain Style Mixing for Face Cartoonization | Webpage
03. Park et al., Monocular Human Digitization via Implicit Re-projection Networks
04. Sun et al., End-to-End Rubbing Restoration Using Generative Adversarial Networks

Extended Abstracts
05. Lee et al., Fix the Noise: Disentangling Source Feature for Transfer Learning of StyleGAN | Webpage
06. He et al., LatentKeypointGAN: Controlling Images via Latent Keypoints | Webpage
07. Nazarovs et al., Image2Gif: Generating Continuous Realistic Animations with Warping NODEs
08. Hassan et al., FontNet: Closing the gap to font designer performance in font synthesis
09. Kwak et al., Generate and Edit Your Own Character in a Canonical View
10. Lee et al., StyLandGAN: A StyleGAN based Landscape Image Synthesis using Depth-map

Accepted at Other Venues
11. Ntavelis et al., Arbitrary-Scale Image Synthesis, CVPR 2022 | Webpage
12. Or-El et al., StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation, CVPR 2022 | Webpage
13. Tang et al., Few-Shot Font Generation by Learning Fine-Grained Local Styles, CVPR 2022
14. Shu et al., Few-Shot Head Swapping in the Wild, CVPR 2022
15. Mao et al., Discrete Representations Strengthen Vision Transformer Robustness, ICLR 2022
16. Yu et al., Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks, ICLR 2022 | Webpage
17. Rodriguez-Pardo et al., SeamlessGAN: Self-Supervised Synthesis of Tileable Texture Maps, TVCG 2022 | Webpage
18. Su et al., A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose, NeurIPS 2021
19. Keshari et al., V3GAN: Decomposing Background, Foreground and Motion for Video Generation, BMVC 2021

Previous Workshops