Demystifying CLIP Data

facebookresearch/metaclip 28 Sep 2023

We believe that the main ingredient to the success of CLIP is its data and not the model architecture or pre-training objective.

3.17 stars / hour

ProPainter: Improving Propagation and Transformer for Video Inpainting

sczhou/propainter ICCV 2023

We also propose a mask-guided sparse video Transformer, which achieves high efficiency by discarding unnecessary and redundant tokens.
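The token-discarding idea can be sketched in a few lines: a patch token only enters attention if its spatial patch overlaps the inpainting mask. This is a minimal illustration of mask-guided sparsity, not ProPainter's actual implementation.

```python
import numpy as np

def prune_tokens(tokens, mask, patch=2):
    """Keep only tokens whose spatial patch overlaps the inpainting
    mask; the rest are discarded before attention (illustrative only).

    tokens: (H//patch * W//patch, D) flattened patch embeddings
    mask:   (H, W) binary inpainting mask
    """
    h, w = mask.shape[0] // patch, mask.shape[1] // patch
    # A patch is "necessary" if any masked pixel falls inside it.
    patch_mask = mask.reshape(h, patch, w, patch).max(axis=(1, 3))
    keep = patch_mask.reshape(-1).astype(bool)
    return tokens[keep], keep

# toy example: 8x8 frame, 2x2 patches -> 16 tokens
mask = np.zeros((8, 8), dtype=np.uint8)
mask[0:4, 0:4] = 1                       # damaged top-left quadrant
tokens = np.random.randn(16, 32)
kept, keep = prune_tokens(tokens, mask)
print(kept.shape)                        # only 4 of 16 tokens survive
```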

Optical Flow Estimation · Video Inpainting

3.53 stars / hour

Text-to-3D using Gaussian Splatting

gsgen3d/gsgen 28 Sep 2023

In this stage, we increase the number of Gaussians by compactness-based densification to enhance continuity and improve fidelity.
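As an illustration of densification driven by a compactness-style criterion (the rule below, inserting a midpoint Gaussian wherever two neighbours leave a coverage gap, is a hypothetical stand-in for gsgen's actual procedure):

```python
import numpy as np

def densify(centers, radii, slack=1.0):
    """Insert a new Gaussian at the midpoint of any nearest-neighbour
    pair whose centre distance exceeds the sum of their radii, i.e.
    where coverage is not compact. Purely illustrative.
    centers: (N, 3), radii: (N,)
    """
    new_c, new_r = [], []
    for i in range(len(centers)):
        d = np.linalg.norm(centers - centers[i], axis=1)
        d[i] = np.inf
        j = int(d.argmin())                 # nearest neighbour of i
        if i < j and d[j] > slack * (radii[i] + radii[j]):
            new_c.append((centers[i] + centers[j]) / 2)
            new_r.append((radii[i] + radii[j]) / 2)
    if new_c:
        centers = np.vstack([centers, new_c])
        radii = np.concatenate([radii, new_r])
    return centers, radii

c = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])  # a coverage gap
r = np.array([0.5, 0.5])
c2, r2 = densify(c, r)
print(len(c2))   # a midpoint Gaussian was inserted -> 3
```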

Text to 3D

2.93 stars / hour

LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models

dvlab-research/longlora 21 Sep 2023

LongLoRA extends LLaMA2 7B from a 4k context to 100k, or LLaMA2 70B to 32k, on a single 8x A100 machine.
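The excerpt does not detail LongLoRA's attention modifications, but the LoRA mechanism it builds on is simple to sketch: the frozen weight W gets a trainable low-rank update (alpha/r)·BA, and with B zero-initialised the adapter starts as an exact no-op.

```python
import numpy as np

d, r, alpha = 64, 8, 16          # hidden size, LoRA rank, scaling
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def forward(x):
    # LoRA forward: y = x W^T + (alpha/r) * x A^T B^T
    # only A and B receive gradients during fine-tuning
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((1, d))
# with B zero-initialised, the adapted model matches the base model
print(np.allclose(forward(x), x @ W.T))  # True
```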

1.26 stars / hour

Communicative Agents for Software Development

openbmb/chatdev 16 Jul 2023

At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting.
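The waterfall chaining can be sketched as a loop over phases, where each phase is a dialogue between two role agents and its artifact seeds the next phase. The `chat` function and role pairings below are illustrative stand-ins, not ChatDev's API.

```python
def chat(instructor, assistant, task, artifact):
    # stand-in for a multi-turn LLM dialogue between two role agents;
    # returns the artifact produced in this phase
    return f"{assistant} output for '{task}' given [{artifact}]"

# the four chronological waterfall stages from the abstract
PHASES = [
    ("CEO", "CTO", "designing"),
    ("CTO", "Programmer", "coding"),
    ("Programmer", "Reviewer", "testing"),
    ("CTO", "Programmer", "documenting"),
]

artifact = "user requirement: a todo app"
for instructor, assistant, task in PHASES:
    artifact = chat(instructor, assistant, task, artifact)

print(artifact.startswith("Programmer output for 'documenting'"))  # True
```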

Decision Making

1.47 stars / hour

Qwen Technical Report

QwenLM/Qwen-7B 28 Sep 2023

Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans.

Language Modelling · Large Language Model

1.25 stars / hour

InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition

internlm/internlm-xcomposer 26 Sep 2023

We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition.

Image Comprehension · Reading Comprehension

1.02 stars / hour

NExT-GPT: Any-to-Any Multimodal LLM

NExT-GPT/NExT-GPT 11 Sep 2023

While Multimodal Large Language Models (MM-LLMs) have recently made exciting strides, they mostly fall prey to the limitation of input-side-only multimodal understanding, without the ability to produce content in multiple modalities.

1.05 stars / hour

Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation

showlab/show-1 27 Sep 2023

In this paper, we are the first to propose a hybrid model, dubbed Show-1, which marries pixel-based and latent-based VDMs for text-to-video generation.

Text-to-Video Generation · Video Alignment +1

0.81 stars / hour

Deep Geometrized Cartoon Line Inbetweening

lisiyao21/animeinbet ICCV 2023

To preserve the precision and detail of the line drawings, we propose a new approach, AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem with vertex repositioning.
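Once two line drawings are geometrized into matched endpoint graphs, the vertex-repositioning step can be sketched as interpolation of corresponding vertices; the graph matching and fusion of mismatched graphs, the hard part AnimeInbet addresses, is assumed already done in this simplified example.

```python
import numpy as np

def inbetween(v0, v1, edges, t=0.5):
    """Illustrative vertex-repositioning inbetween: given two matched
    endpoint graphs (same vertex order, shared edge list), the frame
    at time t linearly repositions each vertex.
    v0, v1: (N, 2) vertex positions in the two key drawings
    edges:  list of (i, j) index pairs forming the line segments
    """
    vt = (1 - t) * v0 + t * v1    # reposition vertices
    return vt, edges              # connectivity carries over unchanged

# a single stroke translating right between two key drawings
v0 = np.array([[0.0, 0.0], [1.0, 0.0]])
v1 = np.array([[2.0, 0.0], [3.0, 0.0]])
edges = [(0, 1)]
vt, _ = inbetween(v0, v1, edges, t=0.5)
print(vt.tolist())   # [[1.0, 0.0], [2.0, 0.0]]
```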

1.64 stars / hour