# Daily AI Papers

This is an automated summary of HuggingFace's Daily Papers, generated with Gemini and GitHub Actions. All credits go to the HuggingFace community for curating the papers, and to the research community for generously sharing their work.

Last updated: 2024-09-04

## Papers for 2024-09-04

| Title | Authors | Summary | Link |
|---|---|---|---|
| Kvasir-VQA: A Text-Image Pair GI Tract Dataset | Andrea Storås, vlbthambawita, stevenah, cise-midoglu, SushantGautam | The paper introduces Kvasir-VQA, an extended dataset derived from the HyperKvasir and Kvasir-Instrument datasets, augmented with question-and-answer annotations to facilitate advanced machine learning tasks in GI diagnostics. The dataset comprises 6,500 annotated images spanning various GI tract conditions and surgical instruments, and it supports multiple question types including yes/no, choice, location, and numerical count. Preliminary experiments demonstrate the dataset's effectiveness in training models for image captioning, VQA, and synthetic image generation. The dataset is designed to bridge the gap between medical image analysis and practical diagnostic tools, ultimately aiming to improve patient outcomes and diagnostic precision. It can be of immense value to AI engineers and data scientists looking to develop robust and accurate AI models for medical image analysis and diagnostics in the GI tract. | Read more |
| OLMoE: Open Mixture-of-Experts Language Models | sewon, jacobmorrison, dirkgr, soldni, Muennighoff | The paper introduces OLMoE, a fully open-source, state-of-the-art Mixture-of-Experts (MoE) language model. The model outperforms other available models with similar active parameters, even surpassing larger models like Llama2-13B-Chat and DeepSeekMoE-16B. The authors present a comprehensive analysis of MoE training and routing, demonstrating how it achieves high specialization and outperforms dense language models on various benchmarks. All aspects of OLMoE are open-sourced, including model weights, training data, code, and logs. This work is highly relevant to practitioners, providing a cost-effective, open-source, high-performing language model for research and development. Moreover, the detailed analysis of MoE design choices offers valuable insights for AI engineers and data scientists working with MoE models. | Read more |
| LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models | Laziobird, anhtuanluu36, sheryc, yuliang03181, zhiyuanhucs | This paper proposes LongRecipe, an efficient training strategy for extending the context window of Large Language Models (LLMs). LongRecipe leverages a novel approach called Impactful Token Analysis to identify key tokens that significantly influence long-text training, enabling the model to learn from shorter text segments while maintaining training efficiency. It also introduces a Position Index Transformation technique to simulate long sequences without needing actual long texts. LongRecipe achieves significant improvements in long-context generalization, demonstrating that it can effectively utilize long sequences while requiring only 30% of the target context window size and reducing computational training resources by over 85% compared to full-sequence training. Moreover, LongRecipe preserves the original LLM's capabilities in general tasks, making it a balanced approach for enhancing both long-range dependency understanding and foundational model performance. This research offers practitioners a more efficient and effective method for extending the context window of LLMs, enabling them to handle more complex and challenging tasks that require long-context understanding. | Read more |
| FLUX that Plays Music | huangjunshi, Changqian, MichaelFan, onion | This paper proposes FluxMusic, an extension of diffusion-based rectified flow Transformers for text-to-music generation. It leverages a latent VAE space of mel-spectrograms, incorporating double and single stream blocks to model text and music. The authors show that FluxMusic outperforms existing methods across multiple metrics, including FAD, IS, and CLAP, demonstrating its scalability and effectiveness. Furthermore, they evaluate the impact of model size, rectified flow training, and other hyperparameters on generative performance. FluxMusic provides a promising avenue for researchers and practitioners in text-to-music generation, offering improved accuracy and scalability compared to previous approaches. | Read more |
| DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos | vinthony, walkingshadow, Xiaoyu521, xiangjun0211, wbhu-tc | DepthCrafter, a novel video-depth estimation method, generates temporally consistent long depth sequences for open-world videos using video diffusion models. Unlike previous approaches, it does not require additional information such as camera poses or optical flow. DepthCrafter achieves this by training a video-to-depth model from a pre-trained image-to-video diffusion model through a three-stage training strategy. The method is evaluated on multiple datasets, outperforming existing approaches on both quantitative and qualitative metrics and demonstrating its effectiveness in generating high-quality depth sequences. Practitioners, such as AI engineers and data scientists, can leverage DepthCrafter for various downstream applications, including depth-based visual effects and conditional video generation. | Read more |
| VideoLLaMB: Long-context Video Understanding with Recurrent Memory Bridges | Yang Liu, zlzheng, cihangxie, ColorfulAI | VideoLLaMB is a new framework that utilizes recurrent memory tokens within bridge layers to encode the entirety of a video sequence, preserving semantic continuity and improving performance across various tasks. The authors introduce a SceneTilling algorithm, which segments videos into independent semantic units. This approach achieves state-of-the-art results across various video QA benchmarks, particularly on longer videos (up to 8x longer) and in the Needle in a Video Haystack (NIAVH) benchmark. VideoLLaMB also enables training-free streaming video captioning and high performance on a single GPU, setting a new foundation for long-form video understanding models. These improvements are particularly relevant to AI practitioners, as they offer a more efficient and effective way to analyze and understand long videos. | Read more |
| Diffusion Policy Policy Optimization | Lars L. Ankile, Allen Z. Ren, daihongkai, pulkitag, jlidard | The paper explores DPPO, a novel algorithm for fine-tuning diffusion-based policies in robot learning tasks using policy gradient methods. The authors demonstrate that DPPO outperforms existing methods for diffusion-based policy fine-tuning and achieves strong results in both simulation and real-world robot manipulation tasks. The paper also provides insights into the mechanisms behind DPPO's success, highlighting its ability to induce structured exploration, maintain training stability, and enhance policy robustness. DPPO could be relevant to practitioners developing robotic systems by providing a robust and efficient method for fine-tuning diffusion-based policies trained on expert demonstrations. | Read more |
| Compositional 3D-aware Video Generation with LLM Director | Anni Tang, bianjiang, leo-guo, deeptimhe, ingzzzz | The paper proposes a novel method for text-to-video generation that explicitly composes concepts in 3D space. The method leverages LLMs to decompose a complex textual prompt into sub-prompts, each describing a specific concept, and generates a 3D representation for each concept using pre-trained expert models. These representations are then composed using priors from multi-modal LLMs and 2D diffusion models. Key results include the generation of high-fidelity videos with diverse motions and the ability to control individual concepts. This research could be relevant to AI engineers and data scientists working on text-to-video generation or interested in applying LLMs to 3D graphics and video generation. | Read more |
| LinFusion: 1 GPU, 1 Minute, 16K Image | Xinchao Wang, ZhenXiong, whyu, Huage001 | This paper presents LinFusion, a novel diffusion model for text-to-image generation that achieves linear time and memory complexity with respect to the number of spatial tokens. The authors achieve this by introducing a generalized linear attention mechanism that serves as a low-rank approximation of popular linear token mixers. Extensive experiments on Stable Diffusion models demonstrate that LinFusion performs on par with or better than the original SD after only modest training, while significantly reducing training time and memory complexity. LinFusion is highly compatible with pre-trained SD components and can generate images at resolutions as high as 16K. Practitioners can leverage this model to generate high-resolution images with significantly reduced computational resources. A minimal sketch of the linear-attention idea appears after this table. | Read more |
| ContextCite: Attributing Model Generation to Context | Aleksander Madry, krisgrg, harshay, bencw | This paper introduces the novel task of context attribution, aiming to identify the specific parts of a context responsible for a language model's generated statement. The paper proposes a scalable and efficient method called CONTEXTCITE, which uses a linear surrogate model to estimate the effect of ablating different parts of the context. The results demonstrate that CONTEXTCITE consistently outperforms existing baselines in identifying relevant sources, particularly for complex tasks like multi-hop question answering and summarization. CONTEXTCITE can be applied by practitioners to verify generated statements, improve response quality by pruning irrelevant context, and detect poisoning attacks in language models. | Read more |
| OD-VAE: An Omni-dimensional Video Compressor for Improving Latent Video Diffusion Model | Qian Wang, Bin Zhu, Bin Lin, Zongjian Li, Liuhan Chen | This research proposes an omni-dimensional video compressor (OD-VAE) to improve the efficiency of latent video diffusion models (LVDMs). Unlike conventional VAEs, OD-VAE compresses videos both temporally and spatially, leading to more concise latent representations and reduced computational requirements for LVDMs. The researchers demonstrate that OD-VAE achieves high video reconstruction accuracy while maintaining high compression speed, improving the training efficiency of LVDMs. The results also suggest that OD-VAE can be used to generate longer videos with limited GPU memory, making it a valuable tool for practitioners working with LVDMs. The findings have implications for AI engineers and data scientists developing video generation models, offering a way to improve model efficiency and reduce computational costs. | Read more |
| GenAgent: Build Collaborative AI Systems with Automated Workflow Generation -- Case Studies on ComfyUI | Lei Bai, Wanli Ouyang, Di Huang, Xiangyuan Xue, whlzy | This research presents GenAgent, a novel LLM-based framework for automating the creation of complex workflows used in collaborative AI systems. The framework utilizes LLMs to represent workflows as code, enabling greater flexibility and scalability compared to monolithic AI models. GenAgent is evaluated on the ComfyUI platform and demonstrates superior performance to baseline methods in generating both run-level and task-level workflows. The key takeaway for practitioners is that GenAgent's ability to automate workflow generation can significantly improve the efficiency and effectiveness of collaborative AI system development. The framework can be applied to a variety of AI systems and platforms, making it a valuable tool for AI engineers and data scientists. | Read more |
| Follow-Your-Canvas: Higher-Resolution Video Outpainting with Extensive Content Generation | Junkun Yuan, Hongfa Wang, Yue Ma, Qihua Chen, cqf | This paper presents Follow-Your-Canvas, a new method for higher-resolution video outpainting with extensive content generation. The proposed method addresses the limitations of existing video outpainting approaches by using a diffusion-based model and dividing the task across spatial windows. By incorporating relative region embedding and a layout encoder, the authors demonstrate that Follow-Your-Canvas generates high-quality results with improved spatial-temporal consistency. The model significantly outperforms existing methods in both low-resolution and high-resolution scenarios. AI engineers can use this method for a wide range of applications, such as generating videos with larger aspect ratios or enhancing the resolution of existing videos. | Read more |
| Density Adaptive Attention-based Speech Network: Enhancing Feature Understanding for Mental Health Disorders | Adrian Kieback, Georgios Ioannides, jsbai-aaron, amanchadha | This research introduces DAAMAudioCNNLSTM and DAAMAudioTransformer, two parameter-efficient and explainable models for audio feature extraction and depression detection. These models leverage the multi-head Density Adaptive Attention Mechanism (DAAM) to dynamically focus on informative speech segments, achieving state-of-the-art performance on the DAIC-WOZ dataset (F1 macro scores of 0.702 and 0.72, respectively). DAAM offers significant explainability benefits by highlighting which features were most informative for diagnosis, making the models more transparent and trustworthy. This work could be valuable for practitioners by providing tools for developing more reliable, clinically useful depression detection models that rely only on audio signals, without supplementary information. | Read more |
| Know When to Fuse: Investigating Non-English Hybrid Retrieval in the Legal Domain | Gerasimos Spanakis, Gijs van Dijck, antoinelouis | This paper investigates the performance of hybrid retrieval methods in the legal domain, specifically in the French language. The authors find that fusing domain-general retrieval models consistently improves performance in zero-shot settings, but in-domain training diminishes the benefits of fusion, suggesting a trade-off between computational resources and accuracy. They also propose a percentile-based score normalization method to address misaligned score distributions across different models, which can improve the effectiveness of fusion (a sketch of this idea appears after this table). The study highlights the importance of carefully considering the choice of retrieval models and fusion techniques in specialized domains, and provides insights that could be valuable for practitioners working on information retrieval in non-English legal domains. | Read more |
| The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts | J. Boal, A. Sanchez-Cuadrado, alvlopez, de-Rodrigo | This research introduces the MERIT Dataset, a multimodal (text, image, and layout) dataset of school reports designed for training visually-rich document understanding (VrDU) models. The dataset, comprising over 400 labels and 33k samples, includes realistic digital and photorealistic documents with controlled bias features (such as gender and name origin), enabling the study of bias in language models. The dataset is publicly available and includes a comprehensive generation pipeline for replication. The authors conduct experiments using state-of-the-art LayoutLM models, demonstrating the dataset's suitability for training and evaluating performance, while showcasing the challenges associated with real-world scenarios. This dataset offers practitioners a benchmark for developing and evaluating models, especially in the context of bias detection and understanding. | Read more |
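
LinFusion's central claim above is linear time and memory in the number of spatial tokens. The paper's generalized linear attention is not reproduced here; the sketch below only illustrates the generic linearization trick behind such mechanisms, where a positive feature map `phi` (our arbitrary choice) replaces the softmax so that the attention product can be reassociated and the N×N attention map is never formed.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: cost is O(N^2) in the token count N."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (N, N) matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Linearized attention: cost is O(N) in the token count N.

    Replacing softmax(Q K^T) with phi(Q) phi(K)^T lets the product be
    reassociated as phi(Q) @ (phi(K)^T V), so only a (d, d) summary of
    keys/values is ever materialized, never an (N, N) matrix.
    """
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                                      # (d, d) summary
    norm = Qp @ Kp.sum(axis=0)[:, None]                # (N, 1) normalizer
    return (Qp @ kv) / norm

N, d = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = linear_attention(Q, K, V)                        # (N, d), no N x N map
```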
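The summary of "Know When to Fuse" mentions percentile-based score normalization but gives no details. Below is a minimal sketch, under our own assumptions, of the general idea: map each retriever's raw scores onto empirical percentile ranks so they share a common [0, 1] scale before a weighted fusion. The function names and the `alpha` weight are illustrative, not the paper's.

```python
import numpy as np

def percentile_normalize(scores):
    """Map raw scores to their empirical percentile rank in [0, 1].

    This removes scale/distribution mismatch between retrievers
    (e.g. unbounded BM25 scores vs. bounded cosine similarities).
    """
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort().argsort()        # rank of each score, 0..n-1
    return ranks / max(len(scores) - 1, 1)

def fuse(lexical_scores, dense_scores, alpha=0.5):
    """Convex combination of percentile-normalized score lists."""
    lex = percentile_normalize(lexical_scores)
    dns = percentile_normalize(dense_scores)
    return alpha * lex + (1 - alpha) * dns

# Scores for the same 5 candidate documents from two retrievers:
bm25 = [12.3, 9.1, 15.8, 2.0, 7.7]        # unbounded lexical scores
cosine = [0.62, 0.71, 0.55, 0.12, 0.49]   # bounded dense similarities
fused = fuse(bm25, cosine)
print(fused.argsort()[::-1])               # fused ranking, best first
```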

## Papers for 2024-09-03

| Title | Authors | Summary | Link |
|---|---|---|---|
| VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters | Xiaoyun Joy Wang, Zhuo Li, twinsken, HALF111, chenmouxiang | This paper introduces VisionTS, a novel zero-shot time series forecasting model that leverages the intrinsic similarities between images and time series. The authors reformulate the forecasting task as an image reconstruction problem and utilize a pre-trained visual masked autoencoder (MAE) to forecast future time series values without any specific training on time series data (a toy sketch of this reformulation appears after this table). VisionTS achieves comparable or even superior performance to existing text-based and time-series-based foundation models in the zero-shot setting, suggesting that visual models could be a free lunch for time series forecasting. This work provides a novel approach for practitioners to build time series forecasting foundation models, particularly in situations where data scarcity or heterogeneity is a challenge. | Read more |
| Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming | Zhifei Xie, gpt-omni | The paper proposes Mini-Omni, an open-source, end-to-end multi-modal large language model (LLM) with real-time speech interaction capabilities. Mini-Omni enables direct audio reasoning via text-instructed speech generation, which utilizes a novel parallel decoding strategy to boost inference speed. The authors introduce the "Any Model Can Talk" framework, which helps transfer the text capabilities of pre-trained models to speech output with minimal degradation, making it valuable for practitioners in the field. They also introduce the VoiceAssistant-400K dataset, specifically designed for speech-output models. Mini-Omni is a significant advancement in human-computer interaction, offering valuable potential for future research. | Read more |
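
VisionTS's reformulation of forecasting as image reconstruction can be pictured with a toy example. The sketch below folds a univariate series into a 2-D "image" whose columns are seasonal cycles and masks the columns to be forecast; the period, shapes, and masking convention are our illustrative assumptions, not the paper's exact pipeline, and the pre-trained MAE itself is omitted.

```python
import numpy as np

def series_to_image(series, period):
    """Fold a 1-D series into a (period x n_cycles) 2-D 'image'.

    Each column is one cycle, so seasonal structure becomes vertical
    texture that an image model can exploit.
    """
    n_cycles = len(series) // period
    return series[: n_cycles * period].reshape(n_cycles, period).T

def mask_forecast_region(image, horizon_cols):
    """Zero out (mask) the last columns -- the future to be predicted."""
    masked = image.copy()
    masked[:, -horizon_cols:] = 0.0
    return masked

t = np.arange(24 * 14, dtype=float)            # two weeks of hourly data
rng = np.random.default_rng(0)
series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)
img = series_to_image(series, period=24)       # (24, 14) image
inp = mask_forecast_region(img, horizon_cols=2)  # hide the last 2 days
# A pretrained visual MAE would now reconstruct the masked columns;
# those reconstructed columns are read back out as the forecast.
```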

## Papers for 2024-09-02

| Title | Authors | Summary | Link |
|---|---|---|---|
| SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding | xumingjun, caixc97, yrshi, Jesse-zjx, Sihangli | This paper presents SciLitLLM, a specialized large language model (LLM) designed for scientific literature understanding. The model utilizes a hybrid training strategy that combines continual pre-training (CPT) on high-quality scientific corpora and supervised fine-tuning (SFT) with diverse scientific instructions. To address the challenges of constructing high-quality CPT corpora and generating diverse SFT instructions, the authors propose a meticulous pipeline that includes PDF text extraction, content error correction, and quality filtering for CPT. For SFT, they introduce a novel LLM-based instruction synthesis method to generate diverse instructions. SciLitLLM demonstrates promising performance on scientific literature understanding benchmarks, outperforming existing LLMs across various tasks, especially in domains like fundamental science and organic materials. These findings are particularly relevant to AI engineers and data scientists developing LLMs for specialized domains, highlighting the potential of combining CPT and SFT for knowledge injection and instruction-following enhancements. | Read more |
| CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization | Jian Yin, BlurBlur, Zhangjunyi, darkcser, FeizeWu | The paper tackles the challenge of balancing identity preservation and text alignment in text-to-image personalization. It introduces a novel method, Context Regularization (CoRe), which improves text embedding learning by regularizing the context tokens surrounding the new concept. CoRe enhances the compatibility of the new concept's text embedding and facilitates a more precise semantic understanding of the prompt. The authors demonstrate that CoRe outperforms several baselines in both identity preservation and text alignment, especially for prompts requiring high visual variability. This research provides valuable insights for practitioners in text-to-image personalization, enabling the generation of high-quality, text-aligned images with improved identity preservation. | Read more |
| The VoxCeleb Speaker Recognition Challenge: A Retrospective | dgromero, jungjee, arsha1, joonson, JaesungHuh | The VoxCeleb Speaker Recognition Challenge (VoxSRC) is a series of annual challenges and workshops that ran from 2019 to 2023. This paper is a retrospective analysis of VoxSRC, covering the challenges' goals, dataset creation, evaluation metrics, and the progression of research techniques. Key results highlight that the state of the art steadily improved over the years, with self-supervised pretrained models significantly advancing performance. The paper also provides valuable insights and recommendations for future challenge organizers, such as maintaining a consistent test set, reporting individual and ensemble model performance, and including a more diverse dataset. Practitioners, particularly those involved in speaker recognition and diarization, will find this retrospective a valuable resource for understanding the evolution of research techniques and identifying future directions in the field. | Read more |
| CURLoRA: Stable LLM Continual Fine-Tuning and Catastrophic Forgetting Mitigation | mnoorfawi | The paper introduces CURLoRA, a novel approach to fine-tuning LLMs that leverages CUR matrix decomposition to mitigate catastrophic forgetting and improve computational efficiency. By using inverted probabilities in the CUR decomposition, the method effectively limits the growth of trainable parameters, resulting in improved stability and performance across tasks while significantly reducing the number of trainable parameters (a sketch of the sampling idea appears after this table). The approach is particularly useful in continual learning scenarios, where LLMs are trained on a sequence of tasks and need to preserve knowledge from previous tasks. The paper shows that CURLoRA outperforms standard LoRA in mitigating catastrophic forgetting and demonstrates the effectiveness of the approach across a range of tasks and datasets. This research offers practical solutions for AI engineers and data scientists seeking to develop and deploy LLMs in real-world settings, where catastrophic forgetting poses a significant challenge. | Read more |
| Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever | hanxiao, makram93, jupyterjazz, michael-guenther, bwang0911 | The paper introduces Jina-ColBERT-v2, a novel multilingual dense retriever based on the ColBERT architecture. It presents various improvements to the model architecture and training pipeline, including the adoption of a modified XLM-RoBERTa encoder, pair training with weakly supervised datasets, and triplet training with high-quality multilingual data. Jina-ColBERT-v2 significantly improves performance across a range of English and multilingual retrieval tasks while reducing storage requirements by up to 50%. The authors also highlight the model's robust performance in low-resource languages, making it suitable for practitioners working on multilingual information retrieval tasks. | Read more |
| SurveySum: A Dataset for Summarizing Multiple Scientific Articles into a Survey Section | Rodrigo Nogueira, Thales Sales Almeida, thiagolaitz, gubartz, carisio | The paper introduces SurveySum, a novel dataset for summarizing multiple scientific articles into a section of a survey. The authors propose two pipelines for summarizing scientific articles into a survey section, which are evaluated using various metrics. The evaluation results highlight the importance of high-quality retrieval stages and the impact of different model configurations on the quality of generated summaries. The paper addresses the lack of domain-specific datasets for summarization, which is crucial for building accurate and robust summarization models. This work provides a valuable resource for researchers and practitioners in natural language processing, particularly those involved in the development and evaluation of summarization models. | Read more |
| Automatic Differential Diagnosis using Transformer-Based Multi-Label Sequence Classification | Lubaba Binte Saber, Mohammad Ashrafuzzaman Khan, AdnanSadi | This paper explores the use of transformer-based multi-label sequence classification for automated differential diagnosis. The authors propose a method to process tabular patient data into text reports and introduce two data modification modules to improve the robustness of the model. Their experiments using four transformer models demonstrate promising results, with over 97% F1 scores, and highlight the model's capability to generalize to challenging scenarios. The results suggest that this approach could be a valuable tool for healthcare professionals seeking to identify and prioritize potential diagnoses for patients, especially when dealing with ambiguous symptoms. This research emphasizes the potential of AI-driven tools to assist with complex medical tasks, particularly for practitioners who may need assistance in identifying a wider range of possible diagnoses. | Read more |
| UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in Multi-View Urban Scenarios | Tianyi Bai, Junyan Ye, Dairong Chen, Haote Yang, Baichuan Zhou | This paper introduces UrBench, a comprehensive benchmark for evaluating Large Multimodal Models (LMMs) in complex, multi-view urban scenarios. The benchmark includes 11.6K questions covering 14 distinct tasks across four evaluation dimensions: Geo-Localization, Scene Reasoning, Scene Understanding, and Object Understanding. UrBench utilizes a novel cross-view detection-matching algorithm to create high-quality annotations and a question generation pipeline that incorporates LMM-based, rule-based, and human-based methods. The authors evaluate 21 LMMs on UrBench and find that current models struggle with multi-view understanding, behave inconsistently across different views, and fall behind human performance in most tasks, highlighting the significant room for improvement in current models' abilities for human-centric AI applications in urban settings. The findings are relevant to AI practitioners working on LMM development, providing valuable insights into the limitations and potential of current models and serving as a benchmark for future research. | Read more |
| InkubaLM: A small language model for low-resource African languages | EricPeter, Jenalea, JessicaOjo, bonadossou, Atnafu | The paper introduces InkubaLM, a 0.4-billion-parameter multilingual language model designed specifically for low-resource African languages. The model demonstrably outperforms larger language models on specific tasks, notably sentiment analysis in Swahili. The authors release the model and datasets to encourage further research and development in the field. By bridging the language gap and offering an accessible tool, the paper highlights the potential for InkubaLM to be used by AI engineers and data scientists in tasks requiring local language understanding, such as machine translation and sentiment analysis. | Read more |
| Large-Scale Multi-omic Biosequence Transformers for Modeling Peptide-Nucleotide Interactions | Eric Oermann, Shivanand P. Lad, Robert J. Steele, Beakal, WeiHua | The authors propose a new method for learning joint representations of protein and nucleotide sequences using a multi-omic transformer architecture. They demonstrate that their model, OmniBioTE, achieves state-of-the-art performance on a variety of tasks related to protein-nucleotide interactions, such as predicting binding affinity and the effects of mutations. They also show that the model can be effectively fine-tuned for single-omics tasks, highlighting its potential for a wider range of applications. This research is relevant to AI engineers, data scientists, and bioinformaticians working in biosequence analysis, as it provides a powerful tool for understanding and modeling complex interactions between proteins and nucleic acids. | Read more |
| VLM4Bio: A Benchmark Dataset to Evaluate Pretrained Vision-Language Models for Trait Discovery from Biological Images | abhilashneog, harishB97, ksmehrab, arkadaw9, sammarfy | This paper introduces VLM4Bio, a new benchmark dataset that evaluates the zero-shot performance of vision-language models (VLMs) for the task of trait discovery from biological images. VLM4Bio includes ≈469K question-answer pairs based on 30k images of three taxonomic groups: fishes, birds, and butterflies. The paper finds that while VLMs perform well on some tasks (e.g., trait identification), they struggle with others (e.g., counting traits, localizing traits), highlighting the need for further research in this area. The findings will be useful for AI engineers and data scientists developing VLMs for organismal biology applications. The dataset can be used to train and evaluate VLMs for a variety of tasks, including species classification, trait identification, and trait grounding, and it provides insights into the limitations of current VLMs that can help guide future research efforts. | Read more |
| ClimDetect: A Benchmark Dataset for Climate Change Detection and Attribution | vasudevlal, matthewlyleolson, musashihinck, anahita-b, sungduk | The paper introduces ClimDetect, a benchmark dataset for climate change detection and attribution (D&A) that leverages daily snapshots of climate model simulations for training and evaluating machine learning (ML) models. The dataset standardizes input and target variables, promoting consistency and comparability across studies. The authors demonstrate the applicability of Vision Transformers (ViTs) for climate fingerprinting, a novel approach in this domain. ClimDetect is publicly accessible and provides a benchmark for advancing climate science by improving model evaluations. Practitioners, such as AI engineers and data scientists working in climate modeling, can use ClimDetect to enhance their D&A research efforts and develop robust ML models for understanding and mitigating climate change. | Read more |
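
CURLoRA's summary above hinges on "inverted probabilities in CUR decomposition." The sketch below shows one plausible reading, under our own assumptions: sample columns C and rows R of a frozen pretrained weight with probabilities that favor its low-magnitude directions, and train only a small core U initialized at zero, analogous to LoRA's low-rank update. The norm-based probability formula and shapes here are illustrative, not the paper's exact recipe.

```python
import numpy as np

def inverted_probs(norms):
    """Invert column/row importance: low-magnitude slices get HIGH
    sampling probability, damping how much fine-tuning can perturb
    the dominant directions of the pretrained weight."""
    p = norms**2 / (norms**2).sum()
    inv = 1.0 / (p + 1e-12)
    return inv / inv.sum()

def cur_init(W, k, rng):
    """Pick k columns (C) and k rows (R) of a frozen weight W via
    inverted-probability sampling; U starts at zero and would be the
    only trained part (cf. LoRA's low-rank BA update)."""
    pc = inverted_probs(np.linalg.norm(W, axis=0))  # per-column norms
    pr = inverted_probs(np.linalg.norm(W, axis=1))  # per-row norms
    cols = rng.choice(W.shape[1], size=k, replace=False, p=pc)
    rows = rng.choice(W.shape[0], size=k, replace=False, p=pr)
    C, R = W[:, cols], W[rows, :]
    U = np.zeros((k, k))                            # trainable core
    return C, U, R

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))   # stand-in frozen pretrained weight
C, U, R = cur_init(W, k=16, rng=rng)
delta_W = C @ U @ R                   # zero at init, like LoRA's BA
```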

## Papers for 2024-08-30

| Title | Authors | Summary | Link |
|---|---|---|---|
| Law of Vision Representation in MLLMs | chenfengx, WaterInSea, Ye27, Borise, shijiay | The paper proposes a novel theory that links the performance of multimodal large language models (MLLMs) to the combination of cross-modal alignment and correspondence in vision representation. The authors establish a linear correlation between a proposed alignment and correspondence score (AC score) and MLLM performance across eight benchmarks. Through this correlation, they propose an "AC policy" to efficiently determine the optimal vision representation, leading to a 99.7% reduction in computational cost compared to traditional methods. The findings are significant for practitioners in AI, particularly data scientists and AI engineers, as they provide an efficient method for selecting the optimal vision representation for MLLMs, thereby streamlining the development process and reducing computational resources. | Read more |
| CogVLM2: Visual Language Models for Image and Video Understanding | ShiyuHuang, LiquidAmmonia, qingsonglv, iyuge2, wenyi | The paper introduces CogVLM2, a new family of visual language models (VLMs) for image and video understanding. The authors introduce an improved training recipe based on the visual expert architecture and a high-resolution cross-module, achieving state-of-the-art results on several benchmarks. The CogVLM2 family incorporates temporal grounding, a technique for automatically generating video annotations with timestamps, allowing for more precise and detailed understanding of video content. The family represents a significant advancement in visual and language modalities, offering powerful tools for research and for practitioners such as AI engineers and data scientists. | Read more |
| WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling | jlking, MingHuiFang, Exgc, ziyue, novateur | The paper introduces WavTokenizer, a novel codec model designed to effectively compress audio signals into a low-dimensional discrete representation. Notably, WavTokenizer achieves a significantly compressed representation of one second of audio with only 75 tokens while maintaining superior subjective reconstruction quality compared to existing acoustic codec models. Moreover, WavTokenizer surpasses state-of-the-art performance in semantic tasks on the ARCH benchmark, highlighting its capability to capture richer semantic information. This work opens a new avenue for effectively compressing audio into a discrete representation, thereby enabling the use of audio data with larger language models. Practitioners, including AI engineers and data scientists, may leverage the presented approach to compress audio data for various applications, such as text-to-speech synthesis, audio generation, and cross-modal retrieval. | Read more |
| ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model | duanyueqi, yejunliang23, yikaiw, wenqsun, Liuff23 | This paper proposes ReconX, a novel 3D scene reconstruction paradigm that utilizes the generative power of video diffusion models to generate more observations from limited sparse views. This allows for higher-quality reconstructions, especially in areas not seen in the original input. ReconX utilizes 3D structure guidance and a confidence-aware optimization scheme within the 3D Gaussian Splatting framework to ensure 3D consistency and minimize visual artifacts. Experimental results show that ReconX outperforms existing state-of-the-art methods in terms of both quality and generalizability. This work is particularly relevant for practitioners in computer vision who deal with sparse-view 3D reconstruction tasks. The ability to reconstruct high-quality 3D models from a limited number of views could be valuable for applications such as autonomous navigation, virtual reality, and 3D modeling. | Read more |
| SAM2Point: Segment Any 3D as Videos in Zero-shot and Promptable Manners | Chengzhuo Tong, Xiangyang Zhu, Renrui Zhang, Chunyuan24, ZiyuG | This paper introduces SAM2Point, a novel framework that adapts the Segment Anything Model 2 (SAM 2) for 3D segmentation. The method efficiently converts 3D data into a series of multi-directional videos, enabling SAM 2 to perform zero-shot segmentation without requiring any 2D-3D projection or additional training. SAM2Point supports various prompt types (e.g., 3D point, box, and mask) and demonstrates robust generalization across diverse 3D scenarios (e.g., 3D objects, indoor scenes, outdoor scenes, and raw LiDAR). This approach is particularly relevant for practitioners as it provides an efficient and highly generalizable way to perform 3D segmentation using a pre-trained model, effectively mitigating the data scarcity issue prevalent in 3D domains. | Read more |
| CSGO: Content-Style Composition in Text-to-Image Generation | hobbyaih, NOVAglow646, syp115, wanghaofan, xingpng | The paper presents CSGO, a novel content-style-stylized image generation framework that utilizes a large-scale dataset, IMAGStyle, to achieve high-quality results in both image-driven and text-driven style transfer. CSGO is trained end-to-end, enabling zero-shot arbitrary style transfer through decoupled content and style feature injection. The key contributions of this work are: (1) a dataset construction pipeline that generates and automatically cleanses stylized data triplets; (2) a unified CSGO framework that leverages independent feature injection modules for content and style features; and (3) a Content Alignment Score (CAS) metric to evaluate the content preservation capabilities of the generated image. This paper is relevant to AI engineers and data scientists working on style transfer, as it offers a robust and efficient framework that can be readily implemented for various applications, such as image editing, art creation, and design. | Read more |
| Physics of Language Models: Part 2.2, How to Learn From Mistakes on Grade-School Math Problems | Zeyuan Allen-Zhu, Yuanzhi Li, Zicheng Xu, Tian Ye | The paper investigates whether language models can learn to correct their reasoning mistakes during generation by incorporating "retry data" into the training process. The authors find that training on data containing erroneous steps immediately followed by their corrections significantly improves the reasoning accuracy of the language model compared to training on error-free data (a toy illustration of such data appears after this table). They also demonstrate that this approach does not require any modifications to the training process, such as label masking, and that it can be used effectively in conjunction with pre-trained models. These findings suggest that practitioners can directly benefit from incorporating retry data into the training of language models, particularly for tasks that require accurate and robust reasoning. | Read more |
| 3D Reconstruction with Spatial Memory | Lourdes Agapito, HengyiWang | This paper presents Spann3R, a novel deep-learning-based method for online 3D reconstruction. Spann3R is trained on ordered or unordered image collections without prior knowledge of the scene or camera parameters, and it directly regresses point maps from images, which are expressed in a common coordinate system. It achieves this by utilizing a spatial memory, which learns to store and access all previously relevant 3D information. By removing the need for optimization-based global alignment, Spann3R facilitates real-time online incremental reconstruction. The authors demonstrate that Spann3R achieves competitive performance compared to prior methods while being significantly faster. For practitioners, this research offers a more efficient and scalable approach to online 3D reconstruction that can be applied in domains such as autonomous driving, virtual reality, and robotics. | Read more |
| StyleRemix: Interpretable Authorship Obfuscation via Distillation and Perturbation of Style Elements | Mitchell Gordon, yejinchoinka, Ximing, hallisky, jrfish | This paper introduces StyleRemix, an interpretable and adaptable authorship obfuscation method that uses fine-grained style elements to rewrite text while preserving content and maintaining fluency. StyleRemix leverages pre-trained LoRA modules to rewrite text along specific style axes, such as formality or length, resulting in more robust obfuscation than prior methods. The authors introduce two new datasets: AuthorMix, a large-scale corpus of 30K texts from 14 authors and four domains, and DISC, a high-quality parallel corpus spanning seven stylistic axes, demonstrating the effectiveness of the model. StyleRemix outperforms prior methods in both automatic and human evaluation. This work has significant implications for practitioners working in anonymous writing, text anonymization, and privacy-preserving text generation. | Read more |
| Scaling Up Diffusion and Flow-based XGBoost Models | TaewooKim, JesseCresswell | This paper investigates the engineering challenges and algorithmic improvements involved in applying XGBoost in diffusion and flow-matching models for tabular data generation. The authors identify and resolve several key implementation issues in prior work, including memory management, data duplication, and parallelization, enabling an efficient and scalable implementation of XGBoost-based generative models. Furthermore, they propose multi-output trees and early stopping as algorithmic improvements. The results show that the proposed method scales to much larger datasets than previously possible and improves both model performance and resource efficiency. This work provides valuable insights for practitioners in tabular generative modeling, offering practical guidance for engineering efficient and scalable models based on XGBoost. | Read more |
| Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold | Leo J. Lee, Mathieu Blanchette, Brandon Amos, Xi Zhang, Lazar Atanackovic | The paper proposes a new method, Meta Flow Matching (MFM), for learning the dynamics of interacting particles. Unlike current flow-based models, which are limited to a single initial population and predefined conditions, MFM can generalize to previously unseen populations by integrating along vector fields on the Wasserstein manifold. The authors demonstrate MFM's ability to improve prediction of individual treatment responses on a large-scale multi-patient single-cell drug screen dataset. This work may be relevant to practitioners in a variety of fields, such as AI engineers, data scientists, and bioinformaticians, who are interested in modeling complex systems with interacting particles, and it could help develop more accurate and personalized treatment regimens for patients with various diseases. | Read more |
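
The "retry data" idea from the Physics of Language Models paper is simple enough to illustrate with a toy example. The sketch below builds a training string in which a deliberately wrong step is immediately retracted and corrected; the `[BACK]` marker and the exact formatting are our placeholders, not necessarily the paper's tokens.

```python
def make_retry_example(question, wrong_step, corrected_step, answer):
    """Build one training string where an erroneous step is followed
    immediately by a retraction marker and its correction. '[BACK]'
    is a placeholder marker, not necessarily the paper's choice."""
    return (
        f"Question: {question}\n"
        f"Step 1: {wrong_step} [BACK]\n"
        f"Step 1: {corrected_step}\n"
        f"Answer: {answer}"
    )

example = make_retry_example(
    question="Alice has 3 bags with 4 apples each. How many apples?",
    wrong_step="3 + 4 = 7 apples",       # deliberately erroneous step
    corrected_step="3 * 4 = 12 apples",  # its immediate correction
    answer="12",
)
print(example)
# Per the paper's finding, plain next-token training on such data
# (no label masking needed) teaches the model to emit corrections
# when it detects its own mistakes mid-generation.
```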
