Search Results for author: Xiaodong Lin

Found 12 papers, 8 papers with code

PAGE: Equilibrate Personalization and Generalization in Federated Learning

no code implementations • 13 Oct 2023 • Qian Chen, Zilong Wang, Jiaqi Hu, Haonan Yan, Jianying Zhou, Xiaodong Lin

Federated learning (FL) is becoming a major driving force behind machine learning as a service, where customers (clients) collaboratively benefit from shared local updates under the orchestration of the service provider (server).

Federated Learning
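For orientation, a minimal sketch of a generic federated-averaging round in this client–server setup is shown below; it is not the PAGE algorithm itself, and the function names, learning rate, and synthetic data are illustrative.

```python
# Generic FedAvg-style round: clients compute local updates, the server
# aggregates them by weighted average. Illustration only, NOT the PAGE method.
import numpy as np

def local_update(global_weights, client_data, lr=0.1, epochs=1):
    """One client's local training step (here: plain least-squares gradient)."""
    w = global_weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server orchestrates one round: broadcast, local updates, weighted average."""
    updates, sizes = [], []
    for data in clients:
        updates.append(local_update(global_weights, data))
        sizes.append(len(data[1]))
    sizes = np.array(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

# Tiny synthetic example with two clients.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
w = np.zeros(3)
for _ in range(5):
    w = federated_round(w, clients)
```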

Towards Language-guided Interactive 3D Generation: LLMs as Layout Interpreter with Generative Feedback

no code implementations • 25 May 2023 • Yiqi Lin, Hao Wu, Ruichen Wang, Haonan Lu, Xiaodong Lin, Hui Xiong, Lin Wang

Generating and editing a 3D scene guided by natural language poses a challenge, primarily due to the complexity of specifying the positional relations and volumetric changes within the 3D space.

3D Generation

Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models

1 code implementation • 23 May 2023 • Ruichen Wang, Zekang Chen, Chen Chen, Jian Ma, Haonan Lu, Xiaodong Lin

Our approach produces a more semantically accurate synthesis by constraining the attention regions of each token in the prompt to the image.

Attribute · Image Generation
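As a rough illustration of constraining each prompt token's attention region, the sketch below suppresses cross-attention scores outside a per-token mask and renormalizes; it is not the paper's exact control algorithm, and the tensor shapes and the strength parameter are assumptions.

```python
# Mask-and-renormalize a cross-attention map so each token attends mostly
# inside its allowed region. Conceptual sketch only; shapes/names are assumed.
import torch

def mask_cross_attention(attn, token_masks, strength=5.0):
    """
    attn:        (num_pixels, num_tokens) cross-attention probabilities.
    token_masks: (num_tokens, num_pixels) binary masks marking where each
                 prompt token is allowed to attend strongly.
    Returns re-normalized attention with out-of-region scores suppressed.
    """
    logits = attn.clamp_min(1e-8).log()           # back to log space
    penalty = (1.0 - token_masks.T) * strength    # (num_pixels, num_tokens)
    logits = logits - penalty                     # push attention into the masks
    return torch.softmax(logits, dim=-1)

# Toy example: 4 pixels, 2 tokens, token 0 restricted to the first two pixels.
attn = torch.full((4, 2), 0.5)
masks = torch.tensor([[1., 1., 0., 0.],
                      [0., 0., 1., 1.]])
print(mask_cross_attention(attn, masks))
```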

Edit Everything: A Text-Guided Generative System for Images Editing

1 code implementation • 27 Apr 2023 • Defeng Xie, Ruichen Wang, Jian Ma, Chen Chen, Haonan Lu, Dong Yang, Fobo Shi, Xiaodong Lin

We introduce a new generative system called Edit Everything, which can take image and text inputs and produce image outputs.
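A minimal sketch of one text-guided editing step in this spirit, using the public diffusers inpainting API, is shown below; it is a generic pattern rather than the Edit Everything system, the checkpoint name is only an example, and the step of deriving the mask from a text query is omitted.

```python
# Repaint a masked region of an image according to a target prompt with a
# diffusion inpainting model. Generic pattern, not the Edit Everything system.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

def edit_region(image: Image.Image, mask: Image.Image, target_prompt: str) -> Image.Image:
    """Repaint the white area of `mask` so that it matches `target_prompt`."""
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt=target_prompt, image=image, mask_image=mask).images[0]

# Usage (paths are illustrative):
# result = edit_region(Image.open("photo.png"), Image.open("mask.png"), "a red sports car")
# result.save("edited.png")
```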

Class Attention Transfer Based Knowledge Distillation

1 code implementation • CVPR 2023 • Ziyao Guo, Haonan Yan, Hui Li, Xiaodong Lin

Previous knowledge distillation methods have shown impressive performance on model compression tasks; however, it is hard to explain how the knowledge they transfer helps to improve the performance of the student network.

Knowledge Distillation · Model Compression
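One plausible reading of class attention transfer is matching the teacher's and student's class activation maps; the sketch below shows such a distillation loss under that assumption, and is not the paper's exact formulation or training recipe.

```python
# Distillation loss that matches normalized class activation maps (CAMs)
# between teacher and student. Illustrative assumption, not the paper's loss.
import torch
import torch.nn.functional as F

def class_activation_maps(features, classifier_weight):
    """
    features:          (B, C, H, W) last-layer feature maps.
    classifier_weight: (num_classes, C) weights of the final linear classifier.
    Returns (B, num_classes, H, W) per-class attention maps.
    """
    return torch.einsum("bchw,kc->bkhw", features, classifier_weight)

def cat_loss(student_feats, student_fc, teacher_feats, teacher_fc):
    """L2 distance between normalized student and teacher class attention maps."""
    cam_s = class_activation_maps(student_feats, student_fc)
    cam_t = class_activation_maps(teacher_feats, teacher_fc)
    # Match spatial resolution, then normalize each map before comparison.
    cam_s = F.adaptive_avg_pool2d(cam_s, cam_t.shape[-2:])
    cam_s = F.normalize(cam_s.flatten(2), dim=-1)
    cam_t = F.normalize(cam_t.flatten(2), dim=-1)
    return F.mse_loss(cam_s, cam_t)

# Toy shapes: batch 2, student/teacher channels 8/16, 10 classes.
loss = cat_loss(torch.randn(2, 8, 7, 7), torch.randn(10, 8),
                torch.randn(2, 16, 7, 7), torch.randn(10, 16))
```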

GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation

3 code implementations • 31 Mar 2023 • Jian Ma, Mingjun Zhao, Chen Chen, Ruichen Wang, Di Niu, Haonan Lu, Xiaodong Lin

Recent breakthroughs in the field of language-guided image generation have yielded impressive achievements, enabling the creation of high-quality and diverse images based on user instructions. Although the synthesis performance is fascinating, one significant limitation of current image generation models is their insufficient ability to generate text coherently within images, particularly for complex glyph structures like Chinese characters.

Optical Character Recognition (OCR) · Text-to-Image Generation

CompoNeRF: Text-guided Multi-object Compositional NeRF with Editable 3D Scene Layout

no code implementations • 24 Mar 2023 • Haotian Bai, Yuanhuiyi Lyu, Lutao Jiang, Sijia Li, Haonan Lu, Xiaodong Lin, Lin Wang

To tackle the issue of 'guidance collapse' and enhance consistency, we propose a novel framework, dubbed CompoNeRF, by integrating an editable 3D scene layout with object-specific and scene-wide guidance mechanisms.

Object · Text to 3D

Deepfake Detection: A Comprehensive Study from the Reliability Perspective

no code implementations • 20 Nov 2022 • Tianyi Wang, Xin Liao, Kam Pui Chow, Xiaodong Lin, Yinglong Wang

In this survey, we provide a thorough review of the existing Deepfake detection studies from the reliability perspective.

DeepFake Detection · Face Swapping

Learning to Walk with Dual Agents for Knowledge Graph Reasoning

1 code implementation • 23 Dec 2021 • Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong Lin, Hui Xiong

Graph walking based on reinforcement learning (RL) has shown great success in navigating an agent to automatically complete various reasoning tasks over an incomplete knowledge graph (KG) by exploring multi-hop relational paths.

reinforcement-learning · Reinforcement Learning (RL)
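To make the graph-walking formulation concrete, the sketch below rolls out a single agent over a toy knowledge graph, rewarding it when a multi-hop path reaches the answer entity; this illustrates the generic single-agent setup, not the paper's dual-agent method, and the triples and the random policy are purely illustrative.

```python
# An agent starts at the query entity, follows relation edges for a fixed
# number of hops, and gets reward 1 if it reaches the answer entity.
# Generic single-agent illustration, NOT the dual-agent algorithm.
import random

# Knowledge graph as (head, relation, tail) triples.
triples = [
    ("alice", "works_at", "acme"),
    ("acme", "located_in", "toronto"),
    ("toronto", "in_country", "canada"),
]
neighbors = {}
for h, r, t in triples:
    neighbors.setdefault(h, []).append((r, t))

def walk(start, answer, max_hops=3, policy=random.choice):
    """Roll out one episode; reward 1.0 if the agent reaches `answer`."""
    entity, path = start, []
    for _ in range(max_hops):
        actions = neighbors.get(entity, [])
        if not actions:
            break
        relation, entity = policy(actions)
        path.append((relation, entity))
        if entity == answer:
            return path, 1.0    # success reward used to update the policy
    return path, 0.0

random.seed(0)
print(walk("alice", "canada"))  # multi-hop query: which country is alice in?
```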

DensE: An Enhanced Non-commutative Representation for Knowledge Graph Embedding with Adaptive Semantic Hierarchy

1 code implementation • 11 Aug 2020 • Haonan Lu, Hailin Hu, Xiaodong Lin

This design principle leads to several advantages of our method:
(1) For composite relations, the corresponding diagonal relation matrices can be non-commutative, reflecting a predominant scenario in real world applications;
(2) Our model preserves the natural interaction between relational operations and entity embeddings;
(3) The scaling operation provides the modeling power for the intrinsic semantic hierarchical structure of entities;
(4) The enhanced expressiveness of DensE is achieved with high computational efficiency in terms of both parameter size and training time; and
(5) Modeling entities in Euclidean space instead of quaternion space keeps the direct geometrical interpretations of relational patterns.

Computational Efficiency · Entity Embeddings · +2
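As a rough illustration of the rotation-plus-scaling view of relations described above, the sketch below scores a triple by rotating and scaling a 3-D head embedding and measuring its distance to the tail; the parameterization, loss, and handling of multiple 3-D blocks are simplified assumptions.

```python
# Score a (head, relation, tail) triple by applying a relation-specific
# rotation and scaling to the head embedding in 3-D Euclidean space.
# Simplified illustration of the rotation-plus-scaling idea, not the full model.
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula: rotation by `angle` around unit vector `axis`."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def score(head, relation, tail):
    """Negative distance between (scale * rotate(head)) and tail."""
    R = rotation_matrix(relation["axis"], relation["angle"])
    transformed = relation["scale"] * (R @ head)
    return -np.linalg.norm(transformed - tail)

head = np.array([1.0, 0.0, 0.0])
tail = np.array([0.0, 2.0, 0.0])
rel = {"axis": np.array([0.0, 0.0, 1.0]), "angle": np.pi / 2, "scale": 2.0}
print(score(head, rel, tail))   # rotating head by 90° about z, then scaling by 2, maps it onto tail
```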
