Selected GenAI papers
created: Thu, 05 Dec 2024 08:44:29 GMT, modified: Sun, 05 Jan 2025 20:09:11 GMT
- https://arxiv.org/html/2410.18050v2
- LongRAG: A Dual-Perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering
- https://arxiv.org/abs/2408.14717
- We propose Table-Augmented Generation (TAG), a unified and general-purpose paradigm for answering natural language questions over databases.
- https://github.com/TAG-Research/lotus
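  - The TAG paper's three steps (query synthesis, query execution, answer generation) can be sketched as below. This is a minimal sketch, not the lotus API: `synthesize_query` and `generate_answer` are hypothetical stand-ins for the LLM calls, and only the SQL execution against an in-memory SQLite table is real.

```python
# Hedged sketch of the TAG pipeline: (1) synthesize a database query from the
# question, (2) execute it, (3) generate an answer grounded in the results.
# The LLM steps are stubbed with placeholder functions; only step 2 is real.
import sqlite3

def synthesize_query(question: str) -> str:
    # Placeholder for "query synthesis": in TAG an LLM would translate the
    # natural-language question into SQL over the actual schema.
    return "SELECT title, year FROM movies WHERE year >= 2020 ORDER BY year"

def generate_answer(question: str, rows) -> str:
    # Placeholder for "answer generation": in TAG an LLM would compose a
    # natural-language answer from the executed query's rows.
    return f"{len(rows)} matching rows: " + "; ".join(f"{t} ({y})" for t, y in rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO movies VALUES (?, ?)",
                 [("Dune", 2021), ("Oppenheimer", 2023), ("Parasite", 2019)])

question = "Which movies in the table were released in 2020 or later?"
sql = synthesize_query(question)           # step 1: query synthesis (stubbed)
rows = conn.execute(sql).fetchall()        # step 2: query execution (real)
answer = generate_answer(question, rows)   # step 3: answer generation (stubbed)
print(answer)  # → "2 matching rows: Dune (2021); Oppenheimer (2023)"
```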
- https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
  - Microsoft Research's introduction to GraphRAG
- https://blog.cubed.run/the-insanity-of-relying-on-vector-embeddings-why-rag-fails-be73554490b2
- The Insanity of Relying on Vector Embeddings: Why RAG Fails
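  - A toy illustration of the article's thesis: similarity search can rank a lexically similar but wrong passage above the relevant one. The "embedding" below is a simple bag-of-words vector (a stand-in, not a real neural embedding), but real embeddings show analogous failures on exact identifiers, numbers, and negation.

```python
# Cosine similarity over bag-of-words "embeddings" ranks the wrong passage
# highest because it shares more surface tokens with the query, even though
# the single distinguishing token ("500") is what actually matters.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: token counts (not a neural model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

query = "how to fix error code 500"
doc_wrong = "how to fix error code 404"            # about a different error
doc_right = "troubleshooting HTTP 500 responses"   # the relevant passage

sim_wrong = cosine(embed(query), embed(doc_wrong))
sim_right = cosine(embed(query), embed(doc_right))
print(f"wrong doc: {sim_wrong:.2f}, right doc: {sim_right:.2f}")
# → wrong doc: 0.83, right doc: 0.20  (retrieval picks the wrong one)
```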
- https://www.databricks.com/blog/long-context-rag-capabilities-openai-o1-and-google-gemini
- The Long Context RAG Capabilities of OpenAI o1 and Google Gemini
- OpenAI o1 models show a consistent improvement over Anthropic and Google models on our long context RAG Benchmark up to 128k tokens.
- Despite lower performance than the SOTA OpenAI and Anthropic models, Google Gemini 1.5 models have consistent RAG performance at extreme context lengths of up to 2 million tokens.
- Models fail on long context RAG in highly distinct ways