Changqing Fu


Research Interests: I'm interested in visual generative models and a broad range of topics in deep learning. Currently I'm working on improving diffusion models for better parameter, training, and inference efficiency and for more effective control, using approaches from geometric deep learning.

I'm a senior-year PhD student at CEREMADE, University Paris IX Dauphine - PSL, fortunately advised by Prof. Laurent D. Cohen. Before that I was a graduate teaching assistant at the University at Albany - SUNY, advised by Prof. Yiming Ying. I received a Master's degree in Applied and Theoretical Mathematics from Paris IX - PSL, advised by Olga Mula and Robin Ryder, and a Bachelor's degree from the School of Mathematical Sciences at Fudan University, advised by Prof. Lei Shi.

Here are my Institutional Page, CV, GitHub, Twitter and LinkedIn.

Email: cfu_at_ceremade_dot_dauphine_dot_fr, evergreencqfu_at_gmail_dot_com

Projects

DeepPrism 

DeepPrism. A network design for generative models that achieves an unprecedented ~1000x improvement in parameter efficiency by leveraging an equivariance along the channel dimension; training and inference FLOPs are also greatly reduced. The design applies equally to attention layers and maintains generation performance across a variety of models such as VAEs and Stable Diffusion (LDM).
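As a hedged illustration of the general principle (not the DeepPrism construction itself): a linear map that is equivariant to permutations of the channel axis is forced into the two-parameter form λ·x + γ·mean(x) (the DeepSets characterization), so its parameter count is independent of the channel width. A minimal sketch:

```python
import numpy as np

def channel_equivariant_linear(x, lam, gamma):
    """Permutation-equivariant map over channels: lam * x + gamma * mean_c(x).

    x: array of shape (channels, features). Only two scalar parameters,
    whatever the number of channels -- an example of how a symmetry on
    the channel dimension collapses parameter count.
    """
    return lam * x + gamma * x.mean(axis=0, keepdims=True)

x = np.random.randn(512, 16)            # 512 channels, 16 features each
y = channel_equivariant_linear(x, 0.7, 0.3)
```

Permuting the channels before or after the map gives the same result, which is the equivariance property being exploited.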

Tip of the Iceberg 

Tip of the Iceberg. An improvement on the widely used perceptual loss that achieves 10x better training efficiency for generative models by computing non-singular symmetric features, rather than raw activations, in the path integral along layers.

Unifying Activation and Attention 

Unifying Activation and Attention. A projective-geometric point of view that unifies different operators in neural networks, under which self-attention intrinsically coincides with activation-convolution layers once proper kernel functions are defined. This connection leaves the form of commonly used neural networks unchanged, can be confirmed visually, and leads to a unified view of the dynamics of neural processes.
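One way to see why such a unification is possible (an illustrative sketch, not the paper's derivation): softmax self-attention is a kernel-weighted average, so the choice of kernel is the only thing separating it from other operator families:

```python
import numpy as np

def kernel_attention(Q, K, V, kernel):
    """Generic attention: out_i = sum_j k(q_i, k_j) v_j / sum_j k(q_i, k_j)."""
    W = kernel(Q, K)                      # (n, n) unnormalized weights
    W = W / W.sum(axis=1, keepdims=True)  # normalize rows
    return W @ V

def exp_kernel(Q, K):
    """Exponential kernel: recovers standard scaled softmax attention."""
    s = Q @ K.T / np.sqrt(Q.shape[1])
    s = s - s.max(axis=1, keepdims=True)  # numerical stability
    return np.exp(s)

n, d = 5, 8
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, n, d))
out = kernel_attention(Q, K, V, exp_kernel)
```

With `exp_kernel` this reproduces ordinary softmax attention exactly; swapping in another kernel changes the operator family while keeping the same outer form.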

Conic Linear Unit 

Conic Linear Unit. A non-pointwise activation function with an infinite-order symmetry group, which improves generation quality and enables alignment of neural networks of different widths. Applications include more flexible Federated Learning via Optimal Transport.

Geometric Deformation on Objects 

Geometric Deformation on Objects. A robust unsupervised two-stage image-manipulation model that edits images by modifying contour constraints. Contours are used because editing constraints are sparse, while a coarse-grained post-processing single-image GAN alleviates the distortion caused by out-of-distribution inputs.

Publications

Talks and Workshops